AI × Crypto News Feed
Real-time AI-curated news from 30,628+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.
Stablecoin yields will bring fresh money to US banks: White House's Witt
White House crypto chief Witt says global demand for the US dollar is massive and that yield-bearing stablecoins will draw additional demand to the currency. The statement suggests stablecoins could channel fresh capital into US banks through yield-generating mechanisms.
BoE open to scrapping stablecoin limit idea after backlash
The Bank of England is considering abandoning its proposed stablecoin holding limits following significant industry backlash. Industry groups argue these restrictions would make the UK appear hostile to cryptocurrency and harm innovation in the sector.
Hyperliquid Looks Like Solana At $20 Last Cycle, Daniel Cheung Says
Daniel Cheung of Syncracy Capital compares Hyperliquid's HYPE token at $35 to Solana at $20 before its major rally, arguing the protocol has become crypto's main trading hub. He believes Hyperliquid could emerge as a category-defining financial trading platform that competes with traditional brokers like Robinhood.
Crypto ATM losses surge 33% in 2025 as AI superpowers scams: CertiK
Crypto ATM losses increased by 33% in 2025 as scammers use AI to scale and supercharge their operations. CertiK identifies crypto ATMs as the most accessible cash-out channel for converting stolen funds.
Meta reveals four Broadcom-built custom AI chips, claims some outperform commercial silicon
Meta has unveiled four custom AI chips developed in partnership with Broadcom, claiming some outperform existing commercial silicon solutions. This move represents Meta's strategic shift toward developing proprietary AI hardware to reduce dependence on third-party chip manufacturers.
BONK.fun team account hacked and used to launch wallet drainer on site
BONK.fun's team account was compromised by hackers who deployed a wallet drainer on the platform. The security breach further worsens BONK.fun's already declining market position and exposes critical vulnerabilities in decentralized platform security.
MediaTek patches bug enabling crypto seed theft in just 45 seconds
Ledger's security team discovered a critical vulnerability in MediaTek's secure boot chain that allowed attackers to extract cryptocurrency seed phrases from Android devices in just 45 seconds. MediaTek has since patched the flaw, which could have exposed sensitive wallet data on affected devices.
Naïve Exposure of Generative AI Capabilities Undermines Deepfake Detection
Researchers demonstrate that commercial AI chatbot interfaces inadvertently expose capabilities that allow adversaries to bypass deepfake detection systems using only policy-compliant prompts. The study reveals that current deepfake detectors fail against semantic-preserving image refinement techniques enabled by widely accessible AI systems.
Gradient Flow Drifting: Generative Modeling via Wasserstein Gradient Flows of KDE-Approximated Divergences
Researchers introduce Gradient Flow Drifting, a new mathematical framework for generative AI models that connects the Drifting Model to Wasserstein gradient flows of the KL divergence under kernel density estimation. The framework includes a mixed-divergence strategy to avoid mode collapse and extends to Riemannian manifolds for semantic-space applications.
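For readers unfamiliar with the underlying object, here is a schematic form of the standard construction the title references, in generic notation (ρ for the evolving density, π for the target, K_h for the KDE kernel); the paper's mixed-divergence variant modifies this basic flow:

```latex
% Wasserstein gradient flow of KL(rho || pi), as a continuity equation:
\partial_t \rho_t = \nabla \cdot \Big( \rho_t \, \nabla \log \tfrac{\rho_t}{\pi} \Big)

% KDE approximation of rho from particles x_1, ..., x_n with kernel K_h:
\hat{\rho}(x) = \frac{1}{n} \sum_{i=1}^{n} K_h(x - x_i)

% yielding a particle ("drifting") update with step size eta:
x_i \leftarrow x_i - \eta \, \nabla_x \big[ \log \hat{\rho}(x_i) - \log \pi(x_i) \big]
```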
Taking Shortcuts for Categorical VQA Using Super Neurons
Researchers introduce Super Neurons (SNs), a new method that probes raw activations in Vision Language Models to improve classification performance while achieving up to 5.10x speedup. Unlike Sparse Attention Vectors, SNs can identify discriminative neurons in shallow layers, enabling extreme early exiting from the first layer at the first generated token.
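A rough sketch of the neuron-selection idea, under the assumption that it boils down to ranking neurons by how well their raw shallow-layer activations separate answer classes; the function names and the variance-ratio score are hypothetical, not the paper's code:

```python
# Hypothetical sketch of the Super Neurons idea: score individual neurons by how
# well their raw activations separate answer classes, then classify using only
# those neurons at a shallow layer (enabling early exit at the first token).
import numpy as np

def select_super_neurons(acts, labels, k=32):
    """acts: (n_samples, n_neurons) raw activations at one shallow layer, taken
    at the first generated token. Returns indices of the top-k neurons ranked
    by a simple between-class / within-class variance ratio."""
    overall_mean = acts.mean(axis=0)
    between = np.zeros(acts.shape[1])
    within = np.zeros(acts.shape[1])
    for c in np.unique(labels):
        a = acts[labels == c]
        between += len(a) * (a.mean(axis=0) - overall_mean) ** 2
        within += ((a - a.mean(axis=0)) ** 2).sum(axis=0)
    return np.argsort(between / (within + 1e-8))[-k:]

def classify(acts, neuron_idx, class_means):
    """Nearest-class-mean decision on the selected neurons only.
    class_means: {label: (k,) mean vector} fit on held-out activations, e.g.
    {c: train_acts[train_labels == c][:, neuron_idx].mean(axis=0) for c in ...}"""
    sub = acts[:, neuron_idx]
    classes = list(class_means)
    dists = np.stack([np.linalg.norm(sub - class_means[c], axis=1) for c in classes])
    return np.array(classes)[np.argmin(dists, axis=0)]
```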
Execution Is the New Attack Surface: Survivability-Aware Agentic Crypto Trading with OpenClaw-Style Local Executors
Researchers propose Survivability-Aware Execution (SAE), a new security framework for AI-powered crypto trading systems that prevents execution-induced losses from compromised AI agents or malicious prompts. The system implements middleware protection between AI strategy engines and exchange executors, reducing maximum drawdown by 93.1% and attack success rates by 27.2% in testing.
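The middleware pattern itself is easy to picture. A minimal sketch, assuming the protection amounts to hard order-gating between the strategy engine and the executor; constraint names and thresholds here are hypothetical, not SAE's actual policy set:

```python
# Illustrative survivability-aware middleware between an AI strategy engine and
# an exchange executor: hard limits hold regardless of what the (possibly
# compromised) strategy engine requests.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str        # "buy" or "sell"
    notional: float  # order size in quote currency

class SurvivabilityGate:
    def __init__(self, equity, max_order_frac=0.05, max_drawdown=0.10):
        self.equity = equity
        self.peak_equity = equity
        self.max_order_frac = max_order_frac
        self.max_drawdown = max_drawdown

    def update_equity(self, equity):
        self.equity = equity
        self.peak_equity = max(self.peak_equity, equity)

    def check(self, order: Order):
        """Return (allowed, reason); block anything breaching hard limits."""
        drawdown = 1.0 - self.equity / self.peak_equity
        if drawdown >= self.max_drawdown:
            return False, "halted: drawdown limit reached"
        if order.notional > self.max_order_frac * self.equity:
            return False, "rejected: order exceeds per-order size cap"
        if order.side not in ("buy", "sell"):
            return False, "rejected: malformed order"
        return True, "ok"

gate = SurvivabilityGate(equity=100_000)
print(gate.check(Order("BTC-USD", "buy", notional=50_000)))
# (False, 'rejected: order exceeds per-order size cap')
```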
Optimal Expert-Attention Allocation in Mixture-of-Experts: A Scalable Law for Dynamic Model Design
Researchers have developed a new scaling law for Mixture-of-Experts (MoE) models that optimizes compute allocation between expert and attention layers. The study extends the Chinchilla scaling law by introducing an optimal ratio formula that follows a power-law relationship with total compute and model sparsity.
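The summary implies an optimal ratio that is a power law in compute and sparsity. An illustrative functional form only, with hypothetical coefficients (the paper fits its own):

```latex
% Illustrative form: optimal expert-to-attention compute ratio r* as a
% power law in total compute C and sparsity s (a, alpha, beta fit empirically):
r^{*}(C, s) = a \, C^{\alpha} s^{\beta}

% For comparison, the Chinchilla law it extends ties optimal parameter count N
% and token count D to compute as roughly N^{*} \propto C^{0.5}, \; D^{*} \propto C^{0.5}.
```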
DeliberationBench: A Normative Benchmark for the Influence of Large Language Models on Users' Views
Researchers developed DeliberationBench, a new benchmark to assess how large language models influence users' opinions on policy matters. A study of 4,088 participants discussing 65 policy proposals with six frontier LLMs found that these models exert substantial influence on users' views, in ways that appear consistent with democratically legitimate deliberative processes.
MCP-in-SoS: Risk assessment framework for open-source MCP servers
Researchers have developed a risk assessment framework for open-source Model Context Protocol (MCP) servers, revealing significant security vulnerabilities through static code analysis. The study found many MCP servers contain exploitable weaknesses that compromise confidentiality, integrity, and availability, highlighting the need for secure-by-design development as these tools become widely adopted for LLM agents.
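For a flavor of what static analysis of an MCP server can look like, here is a toy checker that flags classic injection sinks using Python's standard ast module; the sink list is illustrative, and this is not the paper's tooling:

```python
# Toy static check: walk a Python MCP server's AST and flag calls to
# classic injection sinks (eval/exec/shell execution). Illustrative only.
import ast
import sys

DANGEROUS_CALLS = {"eval", "exec", "system", "popen", "call", "run", "check_output"}

def audit(path):
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", None)
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

if __name__ == "__main__":
    for lineno, name in audit(sys.argv[1]):
        print(f"{sys.argv[1]}:{lineno}: potentially unsafe call to {name}()")
```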
Measuring and Eliminating Refusals in Military Large Language Models
Researchers developed the first benchmark dataset to measure refusal rates in military Large Language Models, finding that current LLMs refuse up to 98.2% of legitimate military queries due to safety behaviors. The study tested 34 models and demonstrated techniques to reduce refusals while maintaining military task performance.
How to Count AIs: Individuation and Liability for AI Agents
A legal research paper proposes the 'Algorithmic Corporation' (A-corp) framework to address the challenge of identifying and assigning liability for AI agents' actions as millions of autonomous AIs proliferate across the economy. The A-corp structure would create legally recognizable entities owned by humans but operated by AIs, enabling both accountability and legal recourse when AI agents cause harm.
Evaluating Adjective-Noun Compositionality in LLMs: Functional vs Representational Perspectives
A research study reveals that large language models develop strong internal compositional representations for adjective-noun combinations, but struggle to consistently translate these representations into successful task performance. The findings highlight a significant gap between what LLMs understand internally and their functional capabilities.
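A toy probe of the representational side, assuming a simple additive-composition test with off-the-shelf sentence embeddings; the library and model are real, but the probe is an illustrative simplification of what such studies measure:

```python
# Toy representational probe: does the embedding of an adjective-noun phrase
# sit near an additive composition of its parts?
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

e_adj, e_noun, e_phrase = model.encode(["red", "car", "red car"])

print(f"cos(phrase, adj+noun)   = {cos(e_phrase, e_adj + e_noun):.3f}")
print(f"cos(phrase, noun alone) = {cos(e_phrase, e_noun):.3f}")
# A gap between representational similarity like this and actual task accuracy
# is the kind of functional/representational split the paper measures.
```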
Training Language Models via Neural Cellular Automata
Researchers developed a method using neural cellular automata (NCA) to generate synthetic data for pre-training language models, achieving up to 6% improvement in downstream performance with only 164M synthetic tokens. This approach outperformed traditional pre-training on 1.6B natural language tokens while being more computationally efficient and transferring well to reasoning benchmarks.
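A minimal sketch of what an NCA-style token generator can look like: cells update from their local neighborhood via a small shared network, and final states decode to token ids. Architecture and sizes here are hypothetical, not the paper's model:

```python
# Minimal 1-D neural cellular automaton as a synthetic-sequence generator.
import torch
import torch.nn as nn

class TinyNCA(nn.Module):
    def __init__(self, state_dim=8, vocab=256):
        super().__init__()
        # local rule: each cell reads itself and its two neighbors
        self.rule = nn.Sequential(
            nn.Conv1d(state_dim, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, state_dim, kernel_size=1),
        )
        self.decode = nn.Linear(state_dim, vocab)

    def forward(self, state, steps=16):
        for _ in range(steps):
            state = state + self.rule(state)         # residual local update
        logits = self.decode(state.transpose(1, 2))  # (batch, length, vocab)
        return logits.argmax(-1)                     # synthetic token ids

nca = TinyNCA()
seed = torch.randn(1, 8, 64)   # random initial cell states on a length-64 grid
tokens = nca(seed)
print(tokens.shape)            # torch.Size([1, 64])
```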
The DMA Streaming Framework: Kernel-Level Buffer Orchestration for High-Performance AI Data Paths
Researchers have developed dmaplane, a Linux kernel module that provides buffer orchestration for AI workloads, addressing the gap between efficient data transport and proper buffer management. The system integrates RDMA, GPU memory management, and NUMA-aware allocation to optimize high-performance AI data paths at the kernel level.
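dmaplane itself lives in the kernel, but the buffer-orchestration pattern it implements can be sketched in userspace. A conceptual analogue only, not the module's API: one buffer fills (the transport side) while the other drains (the compute side):

```python
# Conceptual double-buffering sketch: two reusable buffers cycle between a
# producer (stand-in for DMA fills) and a consumer (stand-in for GPU reads).
import threading
import queue

FREE, READY = queue.Queue(), queue.Queue()
for _ in range(2):                           # two reusable buffers
    FREE.put(bytearray(4096))

def producer(n_chunks):
    for i in range(n_chunks):
        buf = FREE.get()                     # wait for a free buffer
        buf[:4] = i.to_bytes(4, "little")    # stand-in for a DMA fill
        READY.put(buf)
    READY.put(None)                          # end-of-stream sentinel

def consumer():
    while (buf := READY.get()) is not None:
        _ = int.from_bytes(buf[:4], "little")  # stand-in for consumption
        FREE.put(buf)                          # recycle for the next fill

t = threading.Thread(target=producer, args=(8,))
t.start()
consumer()
t.join()
```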
The Curse and Blessing of Mean Bias in FP4-Quantized LLM Training
Researchers have identified a simple fix for training instability in 4-bit quantized large language models: removing mean bias, which they identify as the source of the dominant spectral anisotropy. This mean-subtraction technique substantially improves FP4 training performance while remaining hardware-efficient, potentially making low-bit LLM training more accessible.
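The technique is simple enough to sketch: subtract the per-channel mean before quantizing, keep the mean in higher precision, and add it back. The 4-bit grid below is a simplified integer stand-in for a real FP4 format, not the paper's quantizer:

```python
# Illustrative mean-subtraction before low-bit quantization.
import numpy as np

def quantize_4bit(x):
    """Symmetric 16-level quantizer per channel (columns)."""
    scale = np.abs(x).max(axis=0, keepdims=True) / 7.0 + 1e-12
    return np.clip(np.round(x / scale), -8, 7) * scale

def quantize_mean_subtracted(x):
    mu = x.mean(axis=0, keepdims=True)   # per-channel mean bias
    return quantize_4bit(x - mu) + mu    # mean kept in full precision

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=0.05, size=(1024, 16))  # strongly biased channels
err_plain = np.abs(quantize_4bit(x) - x).mean()
err_sub = np.abs(quantize_mean_subtracted(x) - x).mean()
print(f"plain 4-bit error: {err_plain:.5f}  mean-subtracted: {err_sub:.5f}")
# The bias inflates the quantization scale; removing it shrinks the error.
```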
HTMuon: Improving Muon via Heavy-Tailed Spectral Correction
Researchers have developed HTMuon, an improved optimization algorithm for training large language models that builds on the existing Muon optimizer. HTMuon incorporates a heavy-tailed spectral correction to address limitations in how Muon treats weight spectra, achieving up to a 0.98 perplexity reduction in LLaMA pretraining experiments.
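For context: Muon's core step orthogonalizes the momentum matrix, flattening its singular-value spectrum to all ones (in practice via Newton-Schulz iteration). A heavy-tailed correction in HTMuon's spirit would reweight the spectrum instead; the power-law reweighting below is a hypothetical illustration, not the paper's rule:

```python
# Contrast plain Muon-style orthogonalization with an illustrative
# heavy-tailed spectral reweighting (exponent alpha is hypothetical).
import numpy as np

def muon_direction(m):
    """Plain Muon-style direction: orthogonalize the momentum matrix."""
    u, _, vt = np.linalg.svd(m, full_matrices=False)
    return u @ vt                        # all singular values -> 1

def heavy_tailed_direction(m, alpha=0.25):
    """Keep a power-law-shaped spectrum instead of flattening it entirely."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    s_corr = (s / s.max()) ** alpha      # heavy-tailed reweighting
    return u @ np.diag(s_corr) @ vt

m = np.random.default_rng(0).normal(size=(64, 64))
print(np.linalg.svd(muon_direction(m), compute_uv=False)[:3])         # ~[1 1 1]
print(np.linalg.svd(heavy_tailed_direction(m), compute_uv=False)[:3]) # decaying
```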