Of 37 items collected, 14 important pieces were selected


  1. ByteDance Plans Overseas Deployment of 36,000 Nvidia B200 Chips to Accelerate AI Development ⭐️ 9.0/10
  2. Qatar helium shutdown threatens global chip supply chain within two weeks ⭐️ 8.0/10
  3. Anthropic makes 1M context window generally available for Claude Opus 4.6 and Sonnet 4.6 at standard pricing ⭐️ 8.0/10
  4. Shopify CEO uses AI autoresearch to achieve 53% faster Liquid template engine performance ⭐️ 8.0/10
  5. Critical AppArmor vulnerabilities enable root privilege escalation and denial-of-service attacks ⭐️ 8.0/10
  6. New timing side-channel vulnerabilities discovered in Linux kernel page cache ⭐️ 8.0/10
  7. Lemonade v10 adds Linux NPU support and expands multi-modal AI capabilities ⭐️ 8.0/10
  8. Shanghai’s first brain-computer interface surgery enables paralyzed patient to drink water with thought-controlled gloves ⭐️ 8.0/10
  9. Can I Run AI Locally? A Practical Guide to Local AI Deployment ⭐️ 7.0/10
  10. Investigation reveals coordinated corporate push behind US age-verification bills ⭐️ 7.0/10
  11. Reddit discussion critiques the value of LLM benchmarking papers in academic conferences ⭐️ 7.0/10
  12. Blind user seeks local LLM alternatives to Claude Code and Codex for accessibility tasks ⭐️ 7.0/10
  13. Apple may globally reduce App Store commissions from 30% to 20% ⭐️ 7.0/10
  14. Research finds Alipay DeepLink and JSBridge vulnerability could leak personal information ⭐️ 7.0/10

ByteDance Plans Overseas Deployment of 36,000 Nvidia B200 Chips to Accelerate AI Development ⭐️ 9.0/10

ByteDance is partnering with Southeast Asian cloud service provider Aolani Cloud to deploy approximately 500 Nvidia Blackwell computing systems containing about 36,000 B200 chips in Malaysia, with hardware investments potentially exceeding $2.5 billion. The company plans to use this computing power for overseas AI research and development while supporting global AI service demands. This represents a major strategic investment in AI infrastructure that significantly escalates global AI competition, particularly as Chinese tech companies seek access to advanced AI chips through overseas deployment. The massive hardware commitment demonstrates ByteDance’s serious ambitions in AI development and could reshape the competitive landscape for global AI services. The Nvidia B200 chip features 90GB of VRAM and consumes up to 1200W of power, representing a significant increase from previous generations. The deployment through Aolani Cloud in Malaysia allows ByteDance to access advanced AI hardware despite export restrictions affecting direct purchases by Chinese companies.
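A quick sanity check on the stated figures. The system count, chip count, and per-chip power draw are from the article; the per-system GPU count is derived here, not stated in the article (it happens to match a 72-GPU rack-scale configuration).

```python
# Back-of-envelope figures for the deployment described above.
systems = 500
total_chips = 36_000
chip_power_w = 1_200          # peak draw per B200, per the article

chips_per_system = total_chips / systems
peak_power_mw = total_chips * chip_power_w / 1e6

print(chips_per_system)       # 72 chips per system (derived, not stated)
print(peak_power_mw)          # 43.2 MW peak GPU draw alone
```

At roughly 43 MW of peak GPU power before cooling and networking overhead, this is data-center-scale infrastructure, consistent with the multi-billion-dollar hardware figure.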

telegram · zaihuapd · Mar 13, 08:45

Background: Nvidia’s Blackwell architecture is the company’s latest GPU microarchitecture, succeeding the Hopper and Ada Lovelace architectures. The Blackwell platform offers significant performance improvements, with Nvidia claiming a 4x performance boost over the current Hopper lineup. Aolani Cloud is a Malaysia-based cloud service provider specializing in AI-centric cloud infrastructure and high-performance computing solutions.

Tags: #AI Hardware, #Nvidia, #ByteDance, #Cloud Computing, #AI Infrastructure


Qatar helium shutdown threatens global chip supply chain within two weeks ⭐️ 8.0/10

Qatar’s shutdown of its liquefied natural gas (LNG) production, which supplies approximately 30% of global helium, has created a critical shortage that could disrupt semiconductor manufacturing worldwide within two weeks. Major chipmakers like SK hynix have diversified supplies, while TSMC is monitoring the situation but doesn’t anticipate immediate impact. This disruption highlights critical vulnerabilities in semiconductor supply chains, where geopolitical concentration of essential materials like helium creates systemic risks. With South Korea and Taiwan each accounting for 18% of global semiconductor production capacity, any manufacturing slowdown could have cascading economic and technological consequences, particularly affecting AI memory chip demand. Helium is essential for semiconductor manufacturing due to its cryogenic properties, thermal conductivity, and chemical inertness, particularly for wafer cooling where no viable alternatives exist. While some companies are exploring recycling methods and long-term contracts to improve resilience, the immediate loss of 30% of global supply creates a tight timeline for chipmakers to secure alternative sources.

hackernews · johnbarron · Mar 13, 12:31

Background: Helium is a byproduct of natural gas extraction, primarily obtained during LNG production. Qatar operates the world’s largest helium processing facility through Qatar Gas and Ross Qatar, making it a dominant global supplier. In semiconductor manufacturing, helium is used throughout the production line for cooling, purging, and as a carrier gas due to its unique physical properties that enable precise temperature control and contamination prevention during chip fabrication.

Discussion: Community comments reflect diverse concerns including personal fears about PC replacement costs due to chip shortages, geopolitical implications of supply chain dependencies, and technical questions about helium recycling in manufacturing. Some users noted the US strategic helium reserve divestment in 2024, while others highlighted broader supply chain vulnerabilities beyond helium, such as nitrogen fertilizers.

Tags: #semiconductors, #supply-chain, #manufacturing, #geopolitics, #materials-science


Anthropic makes 1M context window generally available for Claude Opus 4.6 and Sonnet 4.6 at standard pricing ⭐️ 8.0/10

Anthropic has made the 1 million token context window generally available for both Claude Opus 4.6 and Claude Sonnet 4.6 models, applying standard pricing across the full 1M window without any long-context premium. This contrasts with competitors like OpenAI and Gemini, which charge additional fees for prompts exceeding certain token thresholds (272,000 for GPT-5.4 and 200,000 for Gemini 3.1 Pro). This move significantly lowers the cost barrier for long-context AI applications, making it more accessible for developers and enterprises to process large documents, codebases, or complex conversations without worrying about premium pricing tiers. It represents a competitive shift in the LLM market where Anthropic is differentiating itself through pricing transparency and value for long-context use cases. Both Opus 4.6 and Sonnet 4.6 previously supported 200K context windows with 1M token windows available in beta, but now the 1M context is generally available. According to Anthropic’s documentation, Opus 4.6 supports 128K max output tokens while Sonnet 4.6 supports 64K max output tokens, both with extended thinking capabilities.
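The pricing difference can be made concrete with a toy calculation. The 200K/272K thresholds are from the article; the per-million-token prices below are hypothetical placeholders, not any vendor's actual rates.

```python
# Illustrative cost of one long prompt under flat vs tiered input pricing.
# Prices are HYPOTHETICAL; only the threshold structure mirrors the article.

def flat_cost(tokens, price_per_mtok):
    """Flat pricing: every input token bills at the same rate."""
    return tokens / 1e6 * price_per_mtok

def tiered_cost(tokens, base_price, premium_price, threshold):
    """Tiered pricing: tokens beyond the threshold bill at a premium rate."""
    below = min(tokens, threshold)
    above = max(tokens - threshold, 0)
    return below / 1e6 * base_price + above / 1e6 * premium_price

prompt = 500_000  # tokens

print(flat_cost(prompt, 3.0))                    # flat: $1.50
print(tiered_cost(prompt, 3.0, 6.0, 200_000))    # tiered: $0.60 + $1.80 = $2.40
```

With these placeholder numbers, a 2x long-context premium above a 200K threshold makes the same 500K-token prompt 60% more expensive than under flat pricing, which is the cost barrier the article says Anthropic is removing.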

rss · Simon Willison · Mar 13, 18:29

Background: Context window refers to the number of tokens (text units) an LLM can process in a single prompt, determining how much information it can consider at once. Longer context windows enable models to handle larger documents, maintain longer conversations, or analyze complex codebases more effectively. Many LLM providers implement tiered pricing where usage beyond certain context thresholds incurs higher per-token costs, creating a ‘long-context premium’ for extended inputs.

Tags: #llms, #ai, #pricing, #anthropic, #generative-ai


Shopify CEO uses AI autoresearch to achieve 53% faster Liquid template engine performance ⭐️ 8.0/10

Shopify CEO Tobias Lütke submitted a pull request to the Liquid Ruby template engine repository that achieved 53% faster parse+render performance and 61% fewer allocations through dozens of micro-optimizations discovered using a variant of Andrej Karpathy’s autoresearch AI system. The optimization process involved running approximately 120 automated experiments over two days, resulting in 93 commits with specific improvements like replacing StringScanner with byteindex searching and caching small integer conversions. This demonstrates how AI-assisted development tools like autoresearch can uncover significant performance improvements in mature, widely-used open source projects that have already been optimized by hundreds of contributors over decades. The approach could influence software engineering practices by making systematic performance optimization more accessible, especially for developers in high-interruption roles who can leverage coding agents to maintain productive coding workflows. The optimizations included replacing the StringScanner tokenizer with String#byteindex (40% faster searching, 12% parse time reduction), implementing pure-byte parsing to eliminate costly StringScanner resets, and caching to_s conversions for integers 0-999 to avoid allocations. The project leveraged a robust test suite of 974 unit tests and used the Pi coding agent with a custom pi-autoresearch plugin to execute benchmark-driven experiments.

rss · Simon Willison · Mar 13, 03:44

Background: Liquid is an open-source template language created by Shopify and written in Ruby, first developed in 2005 and inspired by Django templates. It has been in production use at Shopify since 2006 and is widely adopted in Ruby on Rails applications for rendering dynamic content. Autoresearch is Andrej Karpathy’s system that enables AI coding agents to run hundreds of semi-autonomous experiments to discover effective techniques, originally developed for optimizing training of models like nanochat.

Tags: #performance-optimization, #ruby, #ai-assisted-development, #open-source, #template-engines


Critical AppArmor vulnerabilities enable root privilege escalation and denial-of-service attacks ⭐️ 8.0/10

Qualys disclosed multiple critical vulnerabilities in AppArmor, collectively named ‘CrackArmor,’ which include a confused-deputy flaw that allows unprivileged users to manipulate security profiles via pseudo-files, bypass user-namespace restrictions, and execute arbitrary code within the kernel. These vulnerabilities facilitate local privilege escalation to root through interactions with tools like Sudo and Postfix, alongside denial-of-service attacks via stack exhaustion and Kernel Address Space Layout Randomization (KASLR) bypasses via out-of-bounds reads. This matters because AppArmor is a widely-used Linux security module deployed in many Debian-based distributions and other Linux systems to enforce mandatory access controls on applications. Successful exploitation could allow attackers to gain complete system control (root access) from an unprivileged local position, compromise container isolation, and disrupt system availability through denial-of-service attacks, affecting numerous production environments and cloud deployments. The vulnerabilities specifically involve manipulation of security profiles through pseudo-files, which can bypass user-namespace protections typically used in container environments. The KASLR bypass technique uses out-of-bounds reads to defeat kernel address randomization, while stack exhaustion attacks can crash systems by consuming all available stack memory.

rss · LWN.net · Mar 13, 14:02

Background: AppArmor is a Linux Security Module (LSM) that implements mandatory access control by confining programs to a limited set of resources through security profiles. A confused-deputy flaw occurs when a component with elevated privileges (the deputy) is tricked into performing actions on behalf of an unprivileged user, leading to unauthorized access. Kernel Address Space Layout Randomization (KASLR) is a security feature that randomizes kernel memory addresses to make exploitation more difficult by preventing attackers from knowing where specific code resides.

Tags: #security, #vulnerabilities, #AppArmor, #Linux, #privilege-escalation


New timing side-channel vulnerabilities discovered in Linux kernel page cache ⭐️ 8.0/10

Researchers Sudheendra Raghav Neela, Jonas Juffinger, Lukas Maar, and Daniel Gruss have identified new timing side-channel vulnerabilities in the Linux kernel’s page cache, published in March 2026. These vulnerabilities revive a class of attacks that was only partially mitigated by fixes in 2019 and by the cachestat() system call added in 2023. This matters because the page cache is fundamental to Linux performance and is shared across privilege levels, making timing information accessible to attackers who can exploit it for serious security breaches. These vulnerabilities could enable attacks like defeating address-space layout randomization (ASLR) or inferring keystroke timing to reconstruct typed text, highlighting ongoing challenges in securing shared kernel resources. The vulnerabilities stem from incomplete fixes to the mincore() system call and from the newer cachestat() system call, both of which leak timing information about page presence and access. Additionally, basic timing measurements of page read latency can be exploited without any specific system call, since pages not in the cache take longer to read.
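The latency-based variant mentioned above can be sketched in a few lines: time a single-page read and classify it against a threshold. This is a generic illustration of the technique, not the researchers' code; the threshold is a placeholder that a real attack would calibrate per machine and confirm with repeated measurements.

```python
import os
import time

PAGE = 4096  # typical page size on x86-64 Linux

def read_latency_ns(fd: int, offset: int) -> int:
    """Time one page-sized pread() at the given offset."""
    start = time.perf_counter_ns()
    os.pread(fd, PAGE, offset)
    return time.perf_counter_ns() - start

def probe(path: str, offset: int, threshold_ns: int = 50_000) -> bool:
    """Heuristically report whether the page at `offset` is in the page cache:
    cached pages return from RAM fast, uncached pages pay a storage round-trip."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return read_latency_ns(fd, offset) < threshold_ns
    finally:
        os.close(fd)
```

The point of the research is that this probe needs no special system call: ordinary reads leak page-presence information, which is why the 2019 mincore() hardening and the cachestat() redesign were insufficient.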

rss · LWN.net · Mar 13, 13:59

Background: Timing side-channel attacks exploit variations in execution time to infer sensitive information, such as cryptographic keys or memory access patterns. The Linux page cache stores recently accessed file data in RAM to speed up reads and writes, but its shared nature across processes can leak timing data. The mincore() system call allows programs to check if pages are in the cache, and previous fixes in 2019 aimed to prevent it from leaking information about unmapped pages.

Tags: #Linux Kernel, #Security, #Side-Channel Attacks, #Page Cache, #Systems Research


Lemonade v10 adds Linux NPU support and expands multi-modal AI capabilities ⭐️ 8.0/10

Lemonade v10 was released this week, introducing Linux support for AMD NPUs (Neural Processing Units) and expanding multi-modal capabilities including image generation/editing, transcription, and speech generation accessible through a single base URL. The release also includes a control center web and desktop app for managing models and backends, with robust support for Ubuntu, Arch, Debian, Fedora, and Snap. This matters because it enables local AI applications to leverage AMD NPU hardware acceleration on Linux systems, significantly improving performance and efficiency for multi-modal AI tasks like image and speech processing. It enhances the local AI ecosystem by making it easier to deploy portable, cross-platform applications that reduce reliance on cloud services. The Linux NPU support requires specific setup steps on Ubuntu 24.04, including installing rocm-dkms/rocm-utils, configuring the amdgpu module with ‘options amdgpu npt=3’, and setting environment variables like HIP_VISIBLE_DEVICES=0 and LEMONADE_BACKEND=npu. The release builds on Lemonade v9’s C++ implementation from four months ago and is part of the AMD Lemonade Developer Challenge, which offers high-end Strix Halo laptops as prizes.

reddit · r/LocalLLaMA · jfowers_amd · Mar 13, 17:49

Background: Lemonade is a local AI framework that enables running language models and multi-modal AI applications on personal devices without cloud dependency. An NPU (Neural Processing Unit) is a specialized hardware accelerator designed for AI inference tasks, such as those in AMD client APUs, offering improved performance and efficiency over general-purpose CPUs or GPUs. Multi-modal AI combines different data types like text, images, and audio to enhance decision-making and user interactions.

Discussion: Community comments express excitement and gratitude for the Linux NPU support, with users sharing technical setup tips for Ubuntu and inquiring about optimization for Strix Halo hardware. There are requests for guides on model conversion to Hybrid mode and comparisons to other tools like LM Studio, highlighting practical concerns and enthusiasm for the advancements.

Tags: #Linux, #NPU, #Local AI, #Multi-modal AI, #AMD


Shanghai’s first brain-computer interface surgery enables paralyzed patient to drink water with thought-controlled gloves ⭐️ 8.0/10

At the World Brain-Computer Interface Joint Conference, Professor Mao Ying from Huashan Hospital disclosed that Shanghai’s first brain-computer interface surgery successfully enabled a paralyzed patient to drink water using thought-controlled gloves. The surgery employed intraoperative functional localization technology that significantly reduced surgical time. This represents a significant advancement in neuroprosthetics, demonstrating real-world functional restoration for paralyzed individuals through brain-computer interface technology. The reduced surgical time through intraoperative localization could make such procedures more accessible and safer for patients worldwide. The system includes a coin-sized implant placed in the patient’s skull to capture neural signals from the sensorimotor cortex, combined with an external glove device controlled by brain signals. The patient had been paralyzed for four years due to cervical dislocation from a car accident.

telegram · zaihuapd · Mar 13, 09:30

Background: Brain-computer interfaces (BCIs) acquire brain signals and translate them into commands for external devices without using normal neuromuscular pathways. Intraoperative functional localization technologies like functional MRI and cortical mapping help surgeons precisely identify brain regions during surgery, improving accuracy and reducing procedure time. Neuroprosthetics combine BCIs with assistive devices to restore lost functions in patients with neurological conditions.

Tags: #brain-computer interface, #medical technology, #neurosurgery, #assistive technology, #neuroprosthetics


Can I Run AI Locally? A Practical Guide to Local AI Deployment ⭐️ 7.0/10

A discussion has emerged about running AI models locally, focusing on hardware requirements, model selection, and practical implementation challenges. The conversation includes technical insights about model architectures like MoE versus dense models and real-world experiences with local deployment tools. This matters because local AI deployment is becoming mainstream in 2025, offering significant cost savings (up to $300-500/month in API costs) while providing better privacy, customization, and reduced latency compared to cloud-based solutions. It affects developers, researchers, and organizations looking to implement AI without relying on external APIs. The discussion highlights that MoE models like GPT-OSS-20B can produce more tokens per second on the same hardware compared to dense models, as they only activate a subset of parameters per token. However, they still require enough VRAM to fit the entire model size, creating a trade-off between performance and memory requirements.
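The VRAM side of the MoE trade-off noted above is simple arithmetic: only a subset of parameters runs per token, but the full parameter set must still fit in memory. The quantization widths below are standard illustrative values, not figures from the discussion.

```python
def weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone (ignores KV cache,
    activations, and runtime overhead, which add several more GB)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 20B-parameter model such as the GPT-OSS-20B mentioned above:
print(weight_vram_gb(20, 16))  # fp16: ~40 GB, beyond most consumer GPUs
print(weight_vram_gb(20, 4))   # 4-bit quantized: ~10 GB, fits a 12-16 GB card
```

This is why quantization dominates local-deployment discussions: it is the difference between needing workstation hardware and fitting on a midrange consumer GPU, while the MoE architecture separately determines how fast tokens come out of whatever fits.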

hackernews · ricardbejarano · Mar 13, 12:46

Background: Local AI deployment refers to running machine learning models directly on personal or organizational hardware rather than through cloud-based APIs. This approach has gained popularity due to privacy concerns, cost savings, and the availability of open-source tools like Ollama and LocalAI that simplify setup. Key considerations include hardware specifications (especially VRAM), model architecture choices, and performance metrics like tokens per second.

Discussion: Community members share practical experiences with local AI deployment, including recommendations for small models like Qwen3.5:9B for embedded applications and frustrations about the lack of clear guidance for model selection based on hardware constraints. Some suggest improvements to tools like adding support for higher memory configurations and reverse query functionality to compare hardware performance across models.

Tags: #AI, #Local Deployment, #Machine Learning, #Hardware, #Open Source


Investigation reveals coordinated corporate push behind US age-verification bills ⭐️ 7.0/10

A Reddit user published an extensive investigation tracing the companies behind US state age-verification bills, using public records such as IRS 990 filings and WHOIS lookups to reveal a coordinated influence operation. The investigation found this operation is building surveillance infrastructure at the operating-system level, while the company behind it faces no new requirements for its own platforms. This matters because it exposes how corporate interests are shaping privacy legislation to create surveillance infrastructure while avoiding accountability, potentially establishing precedents for widespread digital monitoring under the guise of child protection. The findings carry significant implications for digital privacy rights and the integrity of legislative processes in technology policy. The investigation analyzed $2 billion in nonprofit grants and 45 state bills using multiple data sources, including Senate lobbying disclosures, state ethics databases, and Wayback Machine archives.

rss · LWN.net · Mar 13, 14:09

Background: Age-verification bills are proposed legislation requiring online platforms to verify users’ ages, often through government ID or biometric data, purportedly to protect children from harmful content. IRS 990 filings are annual tax returns that nonprofits must submit to the IRS, providing financial and operational details that can reveal funding sources and activities. WHOIS lookups query databases to identify domain name registrants, while Wayback Machine archives preserve historical versions of websites, both useful for tracking organizational changes and influence campaigns.

Tags: #privacy, #surveillance, #policy, #investigation, #age-verification


Reddit discussion critiques the value of LLM benchmarking papers in academic conferences ⭐️ 7.0/10

A Reddit discussion on r/MachineLearning raised concerns about the proliferation of LLM benchmarking papers at major conferences like NeurIPS and ICLR, questioning their utility due to rapid model updates that make results obsolete by publication time. The discussion specifically highlighted how proprietary LLMs are updated monthly, rendering benchmarked models deprecated and sometimes unavailable when papers are published. This critique matters because it challenges the academic publishing ecosystem’s handling of fast-moving AI research, where traditional publication timelines clash with rapid technological advancement. It raises questions about research quality, the ‘publish or perish’ culture, and whether such papers provide meaningful scientific contributions versus serving as marketing for tech companies. The discussion notes that while the benchmark rankings in papers become stale quickly, the datasets created for these benchmarks can remain useful for practitioners to test their own models and catch regressions. However, a key limitation is that most benchmarks test models in isolation, while real-world production workloads involve multi-step chains where errors compound.
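The compounding-error point above is worth putting in numbers. Under the simplifying assumptions that step errors are independent and unrecoverable (neither strictly true in practice), per-step accuracy decays geometrically over a chain:

```python
def chain_success(per_step: float, steps: int) -> float:
    """End-to-end success probability of a chain of independent steps,
    each succeeding with probability `per_step`, with no error recovery."""
    return per_step ** steps

print(round(chain_success(0.95, 1), 3))   # 0.95  -- the isolated benchmark score
print(round(chain_success(0.95, 10), 3))  # 0.599 -- a 10-step production chain
```

A model that looks strong on a single-shot benchmark can fail four times out of ten on a realistic multi-step workload, which is the gap the discussion argues most benchmarking papers never measure.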

reddit · r/MachineLearning · casualcreak · Mar 13, 04:21

Background: NeurIPS (Conference on Neural Information Processing Systems) and ICLR (International Conference on Learning Representations) are two of the three primary high-impact machine learning conferences, alongside ICML. LLM benchmarking involves evaluating large language models on specific tasks or datasets to measure capabilities like accuracy, speed, or efficiency. The rapid iteration of proprietary LLMs by companies like OpenAI, Anthropic, and Google means models can be updated or replaced monthly, creating challenges for academic research that traditionally operates on longer publication cycles.

Discussion: Community sentiment is largely critical of current benchmarking practices, with users describing many papers as having low signal-to-noise ratio and serving primarily to fulfill publication requirements rather than advance science. Key viewpoints include: some users see these papers as essentially ‘product reviews’ rather than scientific contributions; others note that while paper rankings become obsolete, the datasets can be practically useful for testing; and there’s discussion about what constitutes a meaningful benchmark versus those testing ‘random irrelevant datasets.’

Tags: #LLM, #Benchmarking, #Academic Publishing, #Machine Learning, #Research Critique


Blind user seeks local LLM alternatives to Claude Code and Codex for accessibility tasks ⭐️ 7.0/10

A blind user shared how AI has transformed their accessibility to technology, enabling accurate image descriptions, document reading, and programming assistance, but expressed concern about the high costs of cloud services like Claude Code Pro and Codex Pro. They are now investigating whether local LLMs can provide comparable precision and production-ready capabilities for tasks such as building accessible accounting software. This highlights the critical role of AI in enhancing accessibility for people with disabilities, particularly in reducing barriers to information and technology. The search for affordable local alternatives reflects broader trends in democratizing AI tools, where cost-effective solutions could empower more users to leverage AI for personalized assistive technology without relying on expensive cloud services. Community recommendations include Qwen3.5 models for image and video description, with specific mentions of Qwen3.5-Coder-Next for programming tasks and Kimi K2.5 for image support. However, comments note that local LLMs currently cannot fully match the precision and performance of cloud services like Claude Code without significant hardware investment, such as multiple high-end GPUs or Apple Silicon Macs with large unified memory.

reddit · r/LocalLLaMA · Mrblindguardian · Mar 13, 17:54

Background: Local LLMs are large language models that run on personal hardware rather than cloud servers, offering potential cost savings and privacy benefits but often requiring substantial computational resources. Claude Code and Codex are proprietary AI models developed by Anthropic and OpenAI, respectively, known for high performance in coding and multimodal tasks but accessed via subscription-based cloud services. Accessibility applications of AI include image description, document reading, and programming assistance, which can significantly enhance independence for blind and visually impaired users.

Discussion: The community discussion presents a balanced view, with some users recommending specific local models like Qwen3.5 and Kimi K2.5 for image description and programming tasks, while others caution that local alternatives cannot fully rival cloud services like Claude Code without expensive hardware investments. There is agreement that cloud-based options like OpenRouter offer a more affordable middle ground, but hardware considerations, such as using Apple Silicon Macs for efficiency, are also highlighted.

Tags: #Accessibility, #Local LLMs, #AI Applications, #Cost-Benefit Analysis, #Assistive Technology


Apple may globally reduce App Store commissions from 30% to 20% ⭐️ 7.0/10

Apple introduced complex new App Store terms in the EU last week, with details suggesting the company may reduce its standard commission from 30% to 20%, and this change could potentially extend to global markets. This would mark Apple’s first reduction of the standard 30% commission rate for all developers. This potential global commission reduction could significantly impact app developers’ revenue and profitability, potentially reshaping the mobile app economy. It represents a major shift in Apple’s long-standing App Store policies that have faced increasing regulatory scrutiny and developer criticism worldwide. The new EU terms are described as extremely complex, with even Apple Design Award winner Ryan Jones stating that no developer friends could understand their specific meaning. Analysts suggest that if Apple implements a 20% commission only in the EU while maintaining 30% elsewhere, such differential pricing would be unreasonable, hinting at an imminent global adjustment.
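The rumored change is larger for developers than the ten-point headline suggests, because what matters is the developer's share, not the commission itself:

```python
def developer_share(gross: float, commission: float) -> float:
    """Developer take-home per unit of gross App Store revenue."""
    return gross * (1 - commission)

old = developer_share(100.0, 0.30)   # keeps $70 of every $100
new = developer_share(100.0, 0.20)   # keeps $80 of every $100
print((new - old) / old)             # ~0.143, i.e. a +14.3% revenue increase
```

Moving from a 30% to a 20% commission raises the developer's cut from 70% to 80% of gross, a roughly 14.3% increase in take-home revenue per dollar of sales.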

telegram · zaihuapd · Mar 13, 01:49

Background: Apple’s App Store has traditionally charged developers a 30% commission on in-app purchases and subscriptions, with a reduced 15% rate for small businesses earning under $1 million annually. This commission structure has been a central point of contention in antitrust investigations and legal battles, particularly with the EU’s Digital Markets Act requiring Apple to allow alternative app stores and payment systems. The App Store serves as the exclusive distribution platform for iOS apps, giving Apple significant control over the iOS ecosystem.

Tags: #App Store, #Apple, #Tech Policy, #Mobile Development, #Digital Economy


Research finds Alipay DeepLink and JSBridge vulnerability could leak personal information ⭐️ 7.0/10

Security research firm Innora AI Security Research published a technical analysis showing that in Alipay versions com.eg.android.AlipayGphone v10.8.26.7000 and v10.8.30.8000, the combination of DeepLink and WebView JSBridge can create an attack chain where external pages can call certain AlipayJSBridge APIs when users click links. The report states iOS has 18 vulnerable APIs compared to Android’s 13, including tradePay and getLocation interfaces that could expose sensitive network and location data. This vulnerability affects one of the world’s largest mobile payment platforms with over 1 billion users, potentially exposing sensitive financial and location data through what appears to be normal app functionality. The disclosure highlights ongoing security challenges in mobile app ecosystems where deep linking and JavaScript bridge technologies, while convenient for user experience, can create attack surfaces if not properly secured. The research team followed responsible disclosure procedures and submitted multiple reports to Ant Group, which responded on March 10, 2026 that the issues were “normal functionality.” The editorial note cautions that only location permission access and direct payment pop-ups were clearly demonstrated as vulnerabilities, suggesting potential exaggeration in the original report.
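A standard mitigation for this kind of DeepLink-to-JSBridge chain is an origin allowlist: only pages from trusted hosts, loaded over HTTPS, may invoke privileged bridge APIs. The sketch below is a generic illustration of that pattern, not Alipay's actual implementation; the host name is made up, while the two API names are taken from the report.

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"render.example.com"}          # hypothetical trusted origin
PRIVILEGED_APIS = {"tradePay", "getLocation"}   # API names from the report

def may_invoke(page_url: str, api: str) -> bool:
    """Gate privileged bridge calls on the calling page's origin:
    require HTTPS and an allowlisted host; pass everything else through."""
    if api not in PRIVILEGED_APIS:
        return True                             # non-privileged APIs unrestricted
    parsed = urlparse(page_url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

assert may_invoke("https://render.example.com/pay", "tradePay")
assert not may_invoke("https://attacker.example.net/x", "getLocation")
assert not may_invoke("http://render.example.com/pay", "tradePay")  # no plain HTTP
```

The attack chain in the report works precisely because an externally supplied link can land an arbitrary page inside the app's WebView; a per-call origin check like this closes that gap without disabling deep linking itself.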

telegram · zaihuapd · Mar 13, 11:43

Background: Deep linking allows mobile apps to open specific content or functions directly from external links, commonly used for seamless user journeys between web and app environments. JSBridge (JavaScript Bridge) enables communication between web content loaded in WebView components and native app code, allowing JavaScript to call native functions. AlipayJSBridge is Alipay’s specific implementation that provides APIs for payment, location, and other services to web content within the app. When combined improperly, these technologies can allow external pages to access sensitive APIs that should be restricted to trusted domains only.

Tags: #Security, #Mobile Security, #Alipay, #JSBridge, #Privacy