y0news

AI × Crypto News Feed

Real-time AI-curated news from 30,628+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

Crypto · Bullish · CoinTelegraph · Mar 12 · 7/10
⛓️

Stablecoin yields will bring fresh money to US banks: White House's Witt

White House crypto chief Witt says global demand for the US dollar is massive and that stablecoin yields will draw additional capital to the currency. The statement suggests yield-bearing stablecoins could channel fresh capital flows into US banks.

Crypto · Bullish · CoinTelegraph · Mar 12 · 7/10
⛓️

BoE open to scrapping stablecoin limit idea after backlash

The Bank of England is considering abandoning its proposed stablecoin holding limits following significant industry backlash. Industry groups argue these restrictions would make the UK appear hostile to cryptocurrency and harm innovation in the sector.

DeFi · Bullish · NewsBTC · Mar 12 · 7/10
💎

Hyperliquid Looks Like Solana At $20 Last Cycle, Daniel Cheung Says

Daniel Cheung of Syncracy Capital compares Hyperliquid's HYPE token at $35 to Solana at $20 before its major rally, arguing the protocol has become crypto's main trading hub. He believes Hyperliquid could emerge as a category-defining financial trading platform that competes with traditional brokers like Robinhood.

AI × Crypto · Bearish · CoinTelegraph · Mar 12 · 7/10
🤖

Crypto ATM losses surge 33% in 2025 as AI superpowers scams: CertiK

Crypto ATM losses rose 33% in 2025 as scammers used AI to scale and enhance their operations. CertiK identifies crypto ATMs as the most accessible cash-out channel for scammers converting stolen funds.

DeFi · Bearish · Crypto Briefing · Mar 12 · 7/10
💎

BONK.fun team account hacked and used to launch wallet drainer on site

BONK.fun's team account was compromised by hackers who deployed a wallet drainer on the platform. The security breach further worsens BONK.fun's already declining market position and exposes critical vulnerabilities in decentralized platform security.

Crypto · Bearish · CoinTelegraph · Mar 12 · 7/10
⛓️

MediaTek patches bug enabling crypto seed theft in just 45 seconds

Ledger's security team discovered a critical vulnerability in MediaTek's secure boot chain that allows attackers to steal cryptocurrency seed phrases from Android devices in just 45 seconds. MediaTek has since patched the security flaw that could have compromised sensitive crypto wallet information on affected Android devices.

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10
🧠

Naïve Exposure of Generative AI Capabilities Undermines Deepfake Detection

Researchers demonstrate that commercial AI chatbot interfaces inadvertently expose capabilities that allow adversaries to bypass deepfake detection systems using only policy-compliant prompts. The study reveals that current deepfake detectors fail against semantic-preserving image refinement techniques enabled by widely accessible AI systems.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠

Gradient Flow Drifting: Generative Modeling via Wasserstein Gradient Flows of KDE-Approximated Divergences

Researchers introduce Gradient Flow Drifting, a new mathematical framework for generative AI models that connects the Drifting Model to Wasserstein gradient flows of KL divergence under kernel density estimation. The framework includes a mixed-divergence strategy to avoid mode collapse and extends to Riemannian manifolds for improved semantic space applications.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠

Taking Shortcuts for Categorical VQA Using Super Neurons

Researchers introduce Super Neurons (SNs), a method that probes raw activations in Vision Language Models to improve classification performance while achieving up to a 5.10x speedup. Unlike Sparse Attention Vectors, SNs can identify discriminative neurons in shallow layers, enabling extreme early exiting from the first layer at the first generated token.

AI × Crypto · Neutral · arXiv – CS AI · Mar 12 · 7/10
🤖

Execution Is the New Attack Surface: Survivability-Aware Agentic Crypto Trading with OpenClaw-Style Local Executors

Researchers propose Survivability-Aware Execution (SAE), a new security framework for AI-powered crypto trading systems that prevents execution-induced losses from compromised AI agents or malicious prompts. The system implements middleware protection between AI strategy engines and exchange executors, reducing maximum drawdown by 93.1% and attack success rates by 27.2% in testing.

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10
🧠

MCP-in-SoS: Risk assessment framework for open-source MCP servers

Researchers have developed a risk assessment framework for open-source Model Context Protocol (MCP) servers, revealing significant security vulnerabilities through static code analysis. The study found many MCP servers contain exploitable weaknesses that compromise confidentiality, integrity, and availability, highlighting the need for secure-by-design development as these tools become widely adopted for LLM agents.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10
🧠

Measuring and Eliminating Refusals in Military Large Language Models

Researchers developed the first benchmark dataset to measure refusal rates in military Large Language Models, finding that current LLMs refuse up to 98.2% of legitimate military queries due to safety behaviors. The study tested 34 models and demonstrated techniques to reduce refusals while maintaining military task performance.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10
🧠

How to Count AIs: Individuation and Liability for AI Agents

A legal research paper proposes the 'Algorithmic Corporation' (A-corp) framework to address the challenge of identifying and assigning liability for AI agents' actions as millions of autonomous AIs proliferate across the economy. The A-corp structure would create legally recognizable entities owned by humans but operated by AIs, enabling both accountability and legal recourse when AI agents cause harm.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10
🧠

Evaluating Adjective-Noun Compositionality in LLMs: Functional vs Representational Perspectives

A research study reveals that large language models develop strong internal compositional representations for adjective-noun combinations, but struggle to consistently translate these representations into successful task performance. The findings highlight a significant gap between what LLMs understand internally and their functional capabilities.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠

Training Language Models via Neural Cellular Automata

Researchers developed a method using neural cellular automata (NCA) to generate synthetic data for pre-training language models, achieving up to 6% improvement in downstream performance with only 164M synthetic tokens. This approach outperformed traditional pre-training on 1.6B natural language tokens while being more computationally efficient and transferring well to reasoning benchmarks.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠

The Curse and Blessing of Mean Bias in FP4-Quantized LLM Training

Researchers have identified a simple solution to training instability in 4-bit quantized large language models by removing mean bias, which causes the dominant spectral anisotropy. This mean-subtraction technique substantially improves FP4 training performance while being hardware-efficient, potentially enabling more accessible low-bit LLM training.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠

HTMuon: Improving Muon via Heavy-Tailed Spectral Correction

Researchers have developed HTMuon, an improved optimization algorithm for training large language models that builds upon the existing Muon optimizer. HTMuon addresses limitations in Muon's weight spectra by incorporating heavy-tailed spectral corrections, achieving up to a 0.98 perplexity reduction in LLaMA pretraining experiments.
