
14 of 25 items were selected as important content


  1. Trump Fires All 24 Members of National Science Board ⭐️ 9.0/10
  2. Google Plans Up to $40B Investment in Anthropic ⭐️ 9.0/10
  3. Qwen3.6-27B runs at 80 tps on single RTX 5090 with vLLM ⭐️ 8.0/10
  4. GLM-5.1 Runs Locally at 40 tps, 2000+ pp/s ⭐️ 8.0/10
  5. DeepSeek V4 Update Sparks Community Debate ⭐️ 8.0/10
  6. TeamViewer 13/14 to End Public Internet Connections by 2026 ⭐️ 8.0/10
  7. China’s Q1 2026 GDP Growth Masks Rising Youth and Migrant Unemployment ⭐️ 8.0/10
  8. OpenAI Launches GPT-5.5 Biosecurity Bug Bounty Program ⭐️ 8.0/10
  9. New 10GbE USB Adapters: Smaller, Cooler, Cheaper ⭐️ 7.0/10
  10. OpenAI Confirms No Separate GPT-5.5 Codex Model ⭐️ 7.0/10
  11. OpenAI Releases GPT-5.5 Prompting Guide ⭐️ 7.0/10
  12. Xiaomi MiMo V2.5 Pro Debuts on AI Index, Weights Coming ⭐️ 7.0/10
  13. FCC Expands Router Ban to Mobile Hotspots and CPE Devices ⭐️ 7.0/10
  14. China Regulates Online Financial Product Marketing ⭐️ 7.0/10

Trump Fires All 24 Members of National Science Board ⭐️ 9.0/10

President Trump has fired all 24 members of the National Science Board, the oversight body of the National Science Foundation (NSF), effectively removing the entire board that sets NSF policies and advises the president and Congress on science and engineering issues. This unprecedented move raises serious concerns about political interference in science, as the National Science Board plays a critical role in ensuring the independence and integrity of NSF’s research funding decisions. The firing could disrupt NSF operations and undermine trust in U.S. science policy. The National Science Board consists of 24 presidentially appointed members plus the NSF director as an ex officio member; all 24 appointed members have been terminated. The board has statutory responsibilities including policy oversight of NSF and advisory duties to the president and Congress.

hackernews · skullone · Apr 25, 22:39

Background: The National Science Foundation (NSF) is a major U.S. government agency that funds fundamental research and education in science and engineering. The National Science Board, established by the NSF Act of 1950, provides independent oversight and policy guidance to NSF, and also serves as an advisory body to the president and Congress. Board members are typically distinguished scientists and engineers appointed to staggered six-year terms.

Discussion: Commenters expressed shock and concern, with many viewing the firing as a move to undermine scientific integrity. Some questioned the board’s importance, while others speculated about future appointments of political allies. A few sought a silver lining, hoping a future administration could rebuild the system better.

Tags: #NSF, #science policy, #US politics, #research funding


Google Plans Up to $40B Investment in Anthropic ⭐️ 9.0/10

Google plans to invest up to $40 billion in AI company Anthropic, including $10 billion in cash at a $350 billion valuation and up to $30 billion more based on performance targets, plus 5 gigawatts of compute capacity via Google Cloud over five years. This massive investment underscores Google’s strategic bet on Anthropic as a key AI partner, potentially reshaping the competitive landscape against rivals like OpenAI and Microsoft, and signals the growing importance of compute resources in AI development. Anthropic, which develops the Claude AI model and the Claude Code coding tool, is considering an IPO as early as October. Amazon recently added another $5 billion to its investment in Anthropic.

telegram · zaihuapd · Apr 25, 11:02

Background: Anthropic is an AI safety and research company founded by former OpenAI employees, known for its Claude series of large language models. Google has been a prior investor, and this new deal includes significant cloud computing resources using Google’s custom TPU chips, which are specialized for AI workloads.

Tags: #AI, #investment, #Anthropic, #Google, #cloud computing


Qwen3.6-27B runs at 80 tps on single RTX 5090 with vLLM ⭐️ 8.0/10

A Reddit user demonstrated running the Qwen3.6-27B model at approximately 80 tokens per second with a 218k context window on a single RTX 5090 GPU using vLLM 0.19.1rc1, and shared a detailed recipe for reproducing the setup. This achievement shows that large 27B-parameter models can now run efficiently on consumer-grade hardware, making advanced LLM inference more accessible to individuals and small teams without expensive server infrastructure. The model uses NVFP4 quantization and Multi-Token Prediction (MTP) to reduce memory and improve throughput. The recipe is based on a previous post for Qwen3.5-27B, which achieved 77 tps on the same hardware.

reddit · r/LocalLLaMA · Kindly-Cantaloupe978 · Apr 25, 10:21

Background: NVFP4 is a 4-bit floating-point quantization format that preserves dynamic range better than integer quantization. Multi-Token Prediction (MTP) is a training strategy where the model learns to predict multiple future tokens simultaneously, which can be combined with speculative decoding to speed up inference. vLLM is an open-source high-throughput inference engine that supports various optimizations for LLM serving.
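
The block-scaling idea behind NVFP4 can be sketched in a few lines. The toy below assumes nothing about vLLM's actual kernels: it only mimics the spirit of the format, in which each small block of weights shares one scale and each value snaps to the short list of magnitudes an E2M1 4-bit float can represent (real NVFP4 stores the 4-bit codes plus an FP8 scale per 16-value block; this sketch skips the bit packing).

```python
# Toy block-scaled 4-bit quantization in the spirit of NVFP4 (simplified:
# real NVFP4 packs E2M1 codes with an FP8 scale per 16-value block).
E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # representable magnitudes

def quantize_block(block):
    """Scale so the block's max magnitude maps to 6.0, then snap to the grid."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return [0.0] * len(block)
    scale = amax / 6.0  # one shared scale per block
    quantized = []
    for x in block:
        mag = min(E2M1_GRID, key=lambda g: abs(abs(x) / scale - g))
        quantized.append(mag * scale if x >= 0 else -mag * scale)
    return quantized

print(quantize_block([0.03, -0.75, 0.31, 1.5]))  # → [0.0, -0.75, 0.25, 1.5]
```

Note how the small value 0.03 collapses to zero while 0.75 survives exactly: preserving dynamic range within each block is the point of floating-point 4-bit formats over plain integer quantization.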

Discussion: The post received high engagement (266 upvotes, 97% upvote ratio), indicating strong community interest in the performance and the practicality of running large models on consumer GPUs.

Tags: #LLM, #vLLM, #RTX 5090, #Qwen, #local inference


GLM-5.1 Runs Locally at 40 tps, 2000+ pp/s ⭐️ 8.0/10

A user achieved 40 tokens/sec generation and over 2000 prefill tokens/sec running the 478B-parameter GLM-5.1 model locally on 4 NVIDIA RTX 6000 Pro GPUs using REAP-NVFP4 quantization and custom SGLang patches. This demonstrates that very large open-weight models (478B parameters) can be run efficiently on workstation-class hardware, potentially democratizing access to frontier-level LLMs for local inference and development. The setup uses 4 RTX 6000 Pro GPUs (power-limited to 350W each) with REAP-NVFP4 quantization (which prunes the experts from 256 to 154) and two small patches to SGLang for Blackwell architecture support. Throughput degrades gracefully with context depth, from 2229 pp/s with an empty context to 863 pp/s at 64K context.

reddit · r/LocalLLaMA · val_in_tech · Apr 25, 16:31

Background: GLM-5.1 is a large language model with 478 billion parameters, making it one of the largest open-weight models. REAP-NVFP4 is a quantization technique that combines expert pruning (REAP) with NVIDIA’s FP4 format to reduce model size and memory bandwidth requirements. SGLang is a high-performance serving framework for LLMs, and the RTX 6000 Pro is NVIDIA’s Blackwell-architecture workstation GPU with 96 GB of memory.
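
A back-of-envelope check shows why this fits in 4×96 GB. The numbers below are illustrative simplifications, not the actual memory layout: all 478B parameters are treated as prunable expert weights (attention and embedding layers are not pruned in reality), and KV cache and activation memory are ignored.

```python
# Rough weight-memory estimate for a 478B model at 4 bits per parameter,
# with and without REAP expert pruning (154 of 256 experts kept).
def weight_gib(params_billion, bits_per_param, kept_fraction):
    n_bytes = params_billion * 1e9 * kept_fraction * bits_per_param / 8
    return n_bytes / 2**30

full = weight_gib(478, 4, 1.0)          # NVFP4, all 256 experts
pruned = weight_gib(478, 4, 154 / 256)  # REAP keeps 154 of 256 experts
budget_gib = 4 * 96                     # four 96 GB RTX 6000 Pro cards
print(f"unpruned ≈ {full:.0f} GiB, pruned ≈ {pruned:.0f} GiB, budget {budget_gib} GiB")
```

Even unpruned, 4-bit weights (~223 GiB) would fit the 384 GiB budget; the pruning (~134 GiB) is what leaves comfortable headroom for KV cache at long context.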

Discussion: The Reddit community reacted positively (92% upvote ratio), with users impressed by the throughput and asking about further optimization and concurrency settings. The author noted that inference software is still under-optimized for these cards and expects true potential to unfold soon.

Tags: #LLM inference, #local LLM, #quantization, #hardware optimization, #GLM


DeepSeek V4 Update Sparks Community Debate ⭐️ 8.0/10

DeepSeek has released an update to its V4 model, continuing its tradition of open-weight releases and detailed research papers, while competitors like Kimi, GLM, and Qwen have moved toward more closed practices. DeepSeek's commitment to openness matters greatly to the AI community: it provides accessible high-performance models and transparent research that drive innovation, in contrast to the industry's trend toward closed models. The V4 Pro model is approximately 2.5x larger than V3.2 (1.6T vs 0.67T parameters), and some users report that it uses significantly more tokens for generation, suggesting a decrease in intelligence density compared to competitors like GPT-5.4 and GPT-5.5.

reddit · r/LocalLLaMA · techlatest_net · Apr 25, 10:10

Background: Open-weight models release the trained neural network parameters, allowing others to run and fine-tune the model. Base models are pre-trained on large text corpora and can be further fine-tuned for specific tasks. DeepSeek has been a leader in open-weight LLMs, consistently publishing research and releasing models promptly.

Discussion: The community is divided: some praise DeepSeek for maintaining openness while others criticize the V4 Pro’s increased token usage and lower intelligence density compared to GPT-5.4 and GPT-5.5, noting that DeepSeek requires roughly 10x more tokens for similar performance.

Tags: #DeepSeek, #LLM, #AI, #Open Source, #Model Update


TeamViewer 13/14 to End Public Internet Connections by 2026 ⭐️ 8.0/10

TeamViewer announced that versions 13 and 14 will reach end of life on October 31, 2026, after which they will no longer support public internet connections via official servers, only local network functionality. This forces users who purchased perpetual licenses for these versions to switch to a subscription model to continue using remote access over the internet, raising concerns about vendor lock-in and the devaluation of perpetual licenses. Perpetual license holders cannot upgrade to newer versions for free; TeamViewer offers migration discounts but does not provide a free path. The company cites security improvements as the reason for the change.

telegram · zaihuapd · Apr 25, 05:43

Background: TeamViewer is a popular remote desktop software that historically offered perpetual licenses. In recent years, the company has shifted to a subscription-only model, and older versions have been phased out. Version 12 support ended earlier, and now versions 13 and 14 are being deprecated.

Discussion: Community discussions on Spiceworks and TeamViewer forums express frustration, with users criticizing the forced migration and high subscription costs. Some suggest alternatives like AnyDesk or RustDesk.

Tags: #TeamViewer, #software licensing, #remote desktop, #EOL


China’s Q1 2026 GDP Growth Masks Rising Youth and Migrant Unemployment ⭐️ 8.0/10

China’s Q1 2026 GDP grew 5% year-on-year, but the surveyed urban unemployment rate for ages 25-29 rose to 7.7% (the highest since the data series began in December 2023) and the rate for migrant agricultural workers to 5.7% (the highest since the end of COVID-19), according to the National Bureau of Statistics. This structural disconnect between GDP growth and job creation signals that China’s economic recovery is not translating into sufficient employment, especially for youth and migrant workers, which could fuel social instability and force policy adjustments. The 25-29 age group’s unemployment rose from 6.8% to 7.7% over Q1, reflecting structural deterioration as these workers are more experienced and have higher labor force participation. Meanwhile, migrant worker unemployment hit a near-three-year high due to weakness in construction, manufacturing, and services.

telegram · zaihuapd · Apr 25, 14:45

Background: China’s surveyed urban unemployment rate is a key labor market indicator calculated by the National Bureau of Statistics based on sample surveys. The employment elasticity of GDP measures how much employment increases per percentage point of GDP growth; capital-intensive industries like infrastructure and high-end manufacturing have low elasticity, meaning they create fewer jobs per unit of output.

Discussion: The editor’s note expresses disagreement with the expert view that GDP growth is driven by capital-intensive sectors with low employment elasticity, but retains the original text for readers’ deep thinking. No other community comments are provided.

Tags: #China economy, #unemployment, #labor market, #structural issues, #youth employment


OpenAI Launches GPT-5.5 Biosecurity Bug Bounty Program ⭐️ 8.0/10

OpenAI has launched a biosecurity bug bounty program for GPT-5.5, offering a $25,000 reward for the first universal jailbreak that bypasses five biosecurity challenges without triggering safeguards. The program is invitation-only, with applications open from April 23 to June 22, 2026, and testing from April 28 to July 27, 2026. This program highlights the growing concern over AI models being misused for biosecurity threats, such as generating instructions for harmful biological agents. By incentivizing researchers to find vulnerabilities, OpenAI aims to strengthen safety measures before broader deployment, setting a precedent for responsible AI release practices. The bounty specifically targets GPT-5.5 running in Codex Desktop, and requires a universal jailbreak that works across all five biosecurity challenges. Participants must sign a non-disclosure agreement and conduct evaluations on a dedicated platform.

telegram · zaihuapd · Apr 25, 16:36

Background: GPT-5.5 is OpenAI’s latest large language model, emphasizing speed, accuracy, and real-world use. A universal jailbreak is a consistent attack strategy that bypasses safety guardrails across multiple queries, posing significant risks for enabling real-world harm. Biosecurity bug bounties are part of broader efforts to test and improve safeguards against misuse of advanced AI.

Tags: #AI Safety, #Bug Bounty, #OpenAI, #Biosecurity, #GPT-5.5


New 10GbE USB Adapters: Smaller, Cooler, Cheaper ⭐️ 7.0/10

New 10GbE USB adapters based on the Realtek RTL8159 chip are now available, offering smaller size, lower heat, and lower cost compared to previous Thunderbolt-based solutions. However, performance varies significantly depending on the host’s USB standard support and interrupt handling capabilities. This makes 10GbE networking more accessible to a wider range of users, especially those with newer USB4 or Thunderbolt 4 ports, potentially accelerating adoption of high-speed wired networking in home and small office environments. The adapters use the Realtek RTL8159 chip and support USB 3.2 Gen 2x2 (20 Gbps) for full 10GbE speeds, but many hosts lack this standard, resulting in a downgrade to 10 Gbps USB 3.2 Gen 2. Additionally, interrupt rates on lower-powered devices like MacBook Neo can limit throughput, and iperf3 is single-threaded by default, which may underreport performance.

hackernews · calcifer · Apr 25, 05:56

Background: 10GbE (10 Gigabit Ethernet) is a high-speed networking standard offering 10 Gbps data transfer, commonly used in data centers and professional workflows. Traditionally, 10GbE adapters for laptops used Thunderbolt 3/4, which are expensive and bulky. Newer USB-based adapters leverage the USB4 and USB 3.2 Gen 2x2 standards to provide similar speeds at lower cost, but compatibility and performance depend on the host hardware.
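
The line-coding arithmetic behind the "downgrade" caveat is easy to sketch. This considers only 128b/132b encoding overhead and ignores USB packet framing and NIC encapsulation, both of which cost additional throughput in practice:

```python
# Why a 10 Gbps USB 3.2 Gen 2 link cannot quite carry a saturated 10GbE
# stream: the 128b/132b line code alone costs ~3% of the raw link rate.
def effective_gbps(line_rate_gbps, payload_bits, coded_bits):
    return line_rate_gbps * payload_bits / coded_bits

gen2 = effective_gbps(10, 128, 132)    # USB 3.2 Gen 2, 128b/132b encoding
gen2x2 = effective_gbps(20, 128, 132)  # USB 3.2 Gen 2x2, two lanes
print(f"Gen 2 best case ≈ {gen2:.2f} Gb/s; Gen 2x2 ≈ {gen2x2:.2f} Gb/s")
```

When benchmarking, the commenters' tip from the discussion applies: run iperf3 with parallel streams (`-P 4`) rather than its single-stream default, or the measurement itself may become the bottleneck.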

Discussion: The community discussion highlights technical nuances: one commenter notes that iperf3 is single-threaded by default and suggests using -P 4 for multi-threaded testing to better measure performance. Another expresses confusion over USB version naming, while a third points out that Apple hardware lacks USB 3.2 Gen 2x2 support, so adapters downgrade to 10 Gbps. A link to a Framework expansion card is also shared.

Tags: #networking, #hardware, #USB, #10GbE, #benchmark


OpenAI Confirms No Separate GPT-5.5 Codex Model ⭐️ 7.0/10

OpenAI’s Romain Huet confirmed that GPT-5.5 will not have a separate Codex model, as coding capabilities have been unified into the main model since GPT-5.4. This unification simplifies OpenAI’s product lineup and signals a strategic shift toward integrated multimodal and agentic capabilities, potentially improving developer experience and model efficiency. GPT-5.5 shows strong gains in agentic coding, computer use, and general computer-based tasks, building on the unification started in GPT-5.4.

rss · Simon Willison · Apr 25, 12:06

Background: OpenAI Codex was a specialized model for translating natural language into code, used in tools like GitHub Copilot. Agentic coding refers to AI agents that autonomously perform software development tasks. By merging Codex into the main GPT model, OpenAI aims to provide a single model capable of both general reasoning and code generation.

Tags: #openai, #gpt, #ai, #llms, #generative-ai


OpenAI Releases GPT-5.5 Prompting Guide ⭐️ 7.0/10

OpenAI has released an official prompting guide for GPT-5.5, now available in the API, with tips including sending short user-visible updates during multi-step tasks to improve perceived responsiveness. This guide helps developers optimize prompts for GPT-5.5, which OpenAI recommends treating as a new model family rather than a drop-in replacement, potentially improving application performance and user experience. OpenAI advises starting with a fresh baseline prompt instead of migrating existing prompts from older models, and suggests using the Codex app with the command ‘$openai-docs migrate this project to gpt-5.5’ to upgrade code.

rss · Simon Willison · Apr 25, 04:13

Background: GPT-5.5 is the latest large language model from OpenAI, succeeding GPT-5.2 and GPT-5.4. Prompting guides provide best practices for interacting with AI models to achieve desired outputs, and are crucial for developers building applications on top of these models.

Tags: #GPT-5.5, #prompting, #OpenAI, #API, #LLM


Xiaomi MiMo V2.5 Pro Debuts on AI Index, Weights Coming ⭐️ 7.0/10

Xiaomi’s MiMo V2.5 Pro model has been ranked 54th on the Artificial Analysis Intelligence Index, and the company hinted that model weights will be released soon. This marks Xiaomi’s entry into the competitive AI leaderboard with a 1-trillion-parameter MoE model, and the potential open-weight release could significantly impact the local LLM community by enabling self-hosting and fine-tuning. MiMo V2.5 Pro is a multimodal, agentic LLM with 1 million context length, designed for complex software engineering and long-horizon tasks. It entered public beta in April 2026.

reddit · r/LocalLLaMA · Nunki08 · Apr 25, 11:33

Background: The Artificial Analysis Intelligence Index is a composite benchmark that aggregates ten challenging evaluations to measure AI capabilities across mathematics, science, coding, and reasoning. Xiaomi’s MiMo series is a family of large language models developed by the Chinese electronics giant, with the V2.5 Pro being its most capable model to date.
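
Without the index's actual methodology at hand, a composite score can be illustrated as an equally weighted mean over the ten evaluations; the real index's weighting and normalization may well differ:

```python
# Toy composite intelligence index: equally weighted mean of ten benchmark
# scores. The actual Artificial Analysis aggregation is an assumption here.
def composite_index(scores):
    if len(scores) != 10:
        raise ValueError("expected ten evaluation scores")
    return sum(scores) / len(scores)

made_up = [62, 48, 71, 55, 60, 44, 67, 59, 52, 58]  # hypothetical scores
print(composite_index(made_up))  # → 57.6
```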

Discussion: The Reddit community showed high engagement with 342 upvotes and a 96% upvote ratio, expressing excitement about the potential open-weight release. Many users discussed the implications for self-hosting and competition with other open-weight models.

Tags: #Xiaomi, #MiMo, #AI model, #open weights, #LLM


FCC Expands Router Ban to Mobile Hotspots and CPE Devices ⭐️ 7.0/10

The FCC has updated its ban on foreign-made Wi-Fi routers to now include consumer mobile hotspots (MiFi) and residential LTE/5G CPE devices. The ban applies only to new equipment applications, and some models are exempted until October 1, 2027. This expansion tightens supply chain restrictions on networking equipment, potentially affecting availability and pricing of mobile hotspots and fixed wireless access devices in the US. Consumers and businesses relying on these devices may face limited choices or higher costs. The ban does not affect existing approved models, smartphones with hotspot capabilities, or enterprise-grade equipment. The FCC has granted conditional exemptions to some manufacturers like Netgear, allowing sales of certain products until October 1, 2027.

telegram · zaihuapd · Apr 25, 09:32

Background: The FCC’s Covered List identifies communications equipment that poses a national security risk, banning its authorization for use in the US. The original ban targeted foreign-made routers, and this update extends to mobile hotspots and CPE (Customer Premises Equipment) like LTE/5G fixed wireless terminals. CPE refers to devices at the customer’s location that connect to a service provider’s network, such as modems, routers, and gateways.

Tags: #FCC, #regulation, #routers, #hotspots, #CPE


China Regulates Online Financial Product Marketing ⭐️ 7.0/10

Eight Chinese government departments issued the ‘Measures for the Administration of Online Marketing of Financial Products,’ effective September 30, 2026, requiring payment tools to be displayed separately from loan products and banning misleading marketing language. This regulation directly impacts major credit payment products like Huabei, Baitiao, and Yuefu, potentially reshaping how fintech companies market and integrate financial services in e-commerce and daily payment scenarios. Non-bank payment institutions must no longer list loan products as payment options; checkout interfaces must prioritize payment tools. The rules also prohibit phrases like ‘low threshold,’ ‘instant approval,’ and ‘low interest rate’ in loan marketing.

telegram · zaihuapd · Apr 25, 10:03

Background: Huabei (Ant Group), Baitiao (JD Finance), and Yuefu (Meituan) are popular ‘credit payment’ products that allow users to pay later or in installments. Previously, these products were often displayed alongside payment tools like bank cards or Alipay balance, blurring the line between payment and credit. The new regulation aims to protect consumers by ensuring clear distinction and preventing misleading marketing.

Discussion: One commenter noted that Huabei can be forcibly closed by contacting Alipay customer service, after which it will no longer appear. This suggests some users prefer to opt out of such credit products.

Tags: #fintech, #regulation, #China, #payments, #consumer finance