y0news

AI × Crypto News Feed

Real-time AI-curated news from 28,818+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

28,818 articles
DeFi · Bearish · CoinDesk · Apr 13 · 7/10
💎

Attacker mints $1 billion Polkadot tokens on Ethereum, ends up stealing just $250,000

An attacker exploited a vulnerability in a cross-chain bridge contract by forging a state proof message to gain admin control over bridged Polkadot (DOT) tokens on Ethereum. Despite minting $1 billion in fake tokens, the attacker only managed to extract approximately $250,000 in value before liquidity constraints and market impact limited further sales.

$ETH · $DOT
Crypto · Neutral · Daily Hodl · Apr 13 · 6/10
⛓️

How Can Crypto Move Beyond the ‘Wild West’ Image in 2026?

The article addresses cryptocurrency's persistent trust deficit and argues that by 2026, the industry must shed its 'Wild West' reputation to achieve sustainable growth in an expanding multi-billion-dollar market. The piece emphasizes that while crypto has garnered significant attention, legitimacy and institutional confidence remain critical barriers to mainstream adoption and market maturation.

Crypto · Bearish · NewsBTC · Apr 13 · 7/10
⛓️

Bitcoin Bulls Must Hold This Level Or Price Could Crash To $65,000 Again

Bitcoin faces a critical test at the $70,500 support level, which crypto analysts identify as crucial for maintaining the current uptrend. If this level breaks, the price could cascade downward toward the unfilled CME gap below $67,000 and potentially reach $65,000 or lower as whales hunt for liquidity.

$BTC · $ETH
Crypto · Bearish · The Block · Apr 13 · 7/10
⛓️

Bank of Korea calls for ‘circuit breaker’ in local crypto market, citing Bithumb incident

South Korea's central bank is advocating for a 'circuit breaker' mechanism in the domestic cryptocurrency market following Bithumb's accidental transfer of 620,000 BTC, highlighting systemic risks in exchange operations. The BOK's call for stricter internal controls addresses operational vulnerabilities that could threaten market stability and investor protection.

$BTC
AI · Bullish · OpenAI News · Apr 13 · 7/10
🧠

Enterprises power agentic workflows in Cloudflare Agent Cloud with OpenAI

Cloudflare has integrated OpenAI's GPT-5.4 and Codex models into its Agent Cloud platform, enabling enterprises to build and deploy AI agents for production workloads. This integration combines Cloudflare's infrastructure and security capabilities with OpenAI's advanced language models to streamline agentic AI development at enterprise scale.

🏢 OpenAI · 🧠 GPT-5
Crypto · Bearish · CoinTelegraph · Apr 13 · 7/10
⛓️

Musician loses $420K Bitcoin 'retirement fund' via fake Ledger app

A musician lost approximately $420,000 worth of Bitcoin after downloading a counterfeit Ledger hardware wallet application. Blockchain analyst ZachXBT confirmed the stolen 5.9 BTC was transferred to KuCoin deposit addresses, highlighting the ongoing security risks users face from sophisticated phishing schemes targeting cryptocurrency holders.

$BTC
Crypto · Bullish · CoinDesk · Apr 13 · 7/10
⛓️

Strategy signals another bitcoin buy as company needs just 2% annual BTC growth to cover dividends

A publicly traded company purchased nearly three times the amount of bitcoin that miners produced in March, demonstrating aggressive accumulation despite current underwater positions. The company's dividend strategy requires only 2% annual BTC growth, suggesting confidence in bitcoin's long-term trajectory and positioning for sustained shareholder returns.

$BTC
AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers

EquiformerV3, an advanced SE(3)-equivariant graph neural network, achieves significant improvements in efficiency, expressivity, and generality for 3D atomistic modeling. The new version delivers 1.75x speedup, introduces architectural innovations like SwiGLU-S² activations and smooth-cutoff attention, and achieves state-of-the-art results on major molecular modeling benchmarks including OC20 and OMat24.

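The SwiGLU-S² activation named above builds on the standard SwiGLU gate. A minimal NumPy sketch of plain SwiGLU only (not the paper's S² variant; weight and bias names are illustrative):

```python
import numpy as np

def swiglu(x, W, V, b, c):
    """Standard SwiGLU gate: Swish(xW + b) * (xV + c)."""
    z = x @ W + b
    swish = z / (1.0 + np.exp(-z))  # Swish(z) = z * sigmoid(z)
    return swish * (x @ V + c)

# Toy example with identity weights and zero biases
x = np.array([[1.0, 2.0]])
out = swiglu(x, np.eye(2), np.eye(2), 0.0, 0.0)
```

The gating branch lets the network learn which features to pass through, which is why SwiGLU variants are common in transformer feed-forward blocks.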
AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

EigentSearch-Q+: Enhancing Deep Research Agents with Structured Reasoning Tools

Researchers introduce Q+, a structured reasoning toolkit that enhances AI research agents by making web search more deliberate and organized. Integrated into Eigent's browser agent, Q+ demonstrates consistent benchmark improvements of 0.6 to 3.8 percentage points across multiple deep-research tasks, suggesting meaningful progress in autonomous AI agent reliability.

🏢 Anthropic · 🧠 GPT-4 · 🧠 GPT-5
AI · Neutral · arXiv – CS AI · Apr 13 · 7/10
🧠

Medical Reasoning with Large Language Models: A Survey and MR-Bench

Researchers present a comprehensive survey of medical reasoning in large language models, introducing MR-Bench, a clinical benchmark derived from real hospital data. The study reveals a significant performance gap between exam-style tasks and authentic clinical decision-making, highlighting that robust medical reasoning requires more than factual recall in safety-critical healthcare applications.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Bayesian Social Deduction with Graph-Informed Language Models

Researchers introduce a hybrid framework combining probabilistic models with large language models to improve social reasoning in AI agents, achieving a 67% win rate against human players in the game Avalon—a breakthrough in AI's ability to infer beliefs and intentions from incomplete information.
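The probabilistic half of such a hybrid can be as simple as a Bayes update over hidden roles. A toy sketch assuming a binary evil/good role and illustrative likelihoods (not the paper's actual graph-informed model):

```python
def bayes_update(prior_evil, p_obs_given_evil, p_obs_given_good):
    """Update P(player is evil) after observing one action."""
    evil = prior_evil * p_obs_given_evil
    good = (1.0 - prior_evil) * p_obs_given_good
    return evil / (evil + good)

# A suspicious vote judged 4x more likely from an evil player
belief = bayes_update(0.5, 0.8, 0.2)  # -> 0.8
```

In a hybrid agent, an LLM would supply the per-action likelihoods from dialogue, while the explicit update keeps beliefs calibrated across turns.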

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

SkillFactory: Self-Distillation For Learning Cognitive Behaviors

SkillFactory is a novel fine-tuning method that enables language models to learn cognitive behaviors like verification and backtracking without requiring distillation from stronger models. The approach uses self-rearranged training samples during supervised fine-tuning to prime models for subsequent reinforcement learning, resulting in better generalization and robustness.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

The Two-Stage Decision-Sampling Hypothesis: Understanding the Emergence of Self-Reflection in RL-Trained LLMs

Researchers introduce the Two-Stage Decision-Sampling Hypothesis to explain how reinforcement learning enables self-reflection capabilities in large language models, demonstrating that RL's superior performance stems from improved decision-making rather than generation quality. The theory shows that reward gradients distribute asymmetrically across policy components, explaining why RL succeeds where supervised fine-tuning fails.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10
🧠

On the Limits of Layer Pruning for Generative Reasoning in Large Language Models

Research demonstrates that layer pruning—a compression technique for large language models—effectively reduces model size while maintaining classification performance, but critically fails to preserve generative reasoning capabilities like arithmetic and code generation. Even with extensive post-training on 400B tokens, models cannot recover lost reasoning abilities, revealing fundamental limitations in current compression approaches.
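Layer pruning in its simplest form just drops whole transformer blocks. An illustrative sketch, assuming the common heuristic of removing a contiguous middle block (the paper's exact selection procedure may differ):

```python
def prune_layers(layers, start, count):
    """Remove `count` consecutive layers beginning at index `start`."""
    return layers[:start] + layers[start + count:]

# A 12-block model, with blocks represented by name for illustration
model = [f"layer_{i}" for i in range(12)]
pruned = prune_layers(model, start=6, count=4)  # drop layers 6-9
```

The finding above is that while classification accuracy survives this kind of surgery, multi-step generative reasoning does not, even after heavy post-training.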

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Listener-Rewarded Thinking in VLMs for Image Preferences

Researchers introduce a listener-augmented reinforcement learning framework for training vision-language models to better align with human visual preferences. By using an independent frozen model to evaluate and validate reasoning chains, the approach achieves 67.4% accuracy on ImageReward benchmarks and demonstrates significant improvements in out-of-distribution generalization.

🏢 Hugging Face
AI · Neutral · arXiv – CS AI · Apr 13 · 7/10
🧠

The Hot Mess of AI: How Does Misalignment Scale With Model Intelligence and Task Complexity?

Researchers find that as AI models scale up and tackle more complex tasks, their failures become increasingly incoherent and unpredictable rather than systematically misaligned. Using error-variance decomposition, the study shows that longer reasoning chains correlate with more random, nonsensical failures, suggesting future advanced AI systems may cause unpredictable accidents rather than exhibit consistent goal misalignment.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Revitalizing Black-Box Interpretability: Actionable Interpretability for LLMs via Proxy Models

Researchers propose a cost-effective proxy model framework that uses smaller, efficient models to approximate the interpretability explanations of expensive Large Language Models (LLMs), achieving over 90% fidelity at just 11% of computational cost. The framework includes verification mechanisms and demonstrates practical applications in prompt compression and data cleaning, making interpretability tools viable for real-world LLM development.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Webscale-RL: Automated Data Pipeline for Scaling RL Data to Pretraining Levels

Researchers introduced Webscale-RL, a data pipeline that converts large-scale pre-training documents into 1.2 million diverse question-answer pairs for reinforcement learning training. The approach enables RL models to achieve pre-training-level performance with up to 100x fewer tokens, addressing a critical bottleneck in scaling RL data and potentially advancing more efficient language model development.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Unmasking Puppeteers: Leveraging Biometric Leakage to Disarm Impersonation in AI-based Videoconferencing

Researchers have developed a biometric leakage defense system that detects impersonation attacks in AI-based videoconferencing by analyzing pose-expression latents rather than reconstructed video. The method uses a contrastive encoder to isolate persistent identity cues, successfully flagging identity swaps in real-time across multiple talking-head generation models.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Neurons Speak in Ranges: Breaking Free from Discrete Neuronal Attribution

Researchers introduce NeuronLens, a framework that interprets neural networks by analyzing activation ranges rather than individual neurons, addressing the widespread polysemanticity problem in large language models. The range-based approach enables more precise concept manipulation while minimizing unintended degradation to model performance.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Commanding Humanoid by Free-form Language: A Large Language Action Model with Unified Motion Vocabulary

Researchers introduce Humanoid-LLA, a Large Language Action Model enabling humanoid robots to execute complex physical tasks from natural language commands. The system combines a unified motion vocabulary, physics-aware controller, and reinforcement learning to achieve both language understanding and real-world robot control, demonstrating improved performance on Unitree G1 and Booster T1 humanoids.

AI · Neutral · arXiv – CS AI · Apr 13 · 7/10
🧠

When Identity Skews Debate: Anonymization for Bias-Reduced Multi-Agent Reasoning

Researchers present a framework to identify and mitigate identity bias in multi-agent debate systems where LLMs exchange reasoning. The study reveals that agents suffer from sycophancy (adopting peer views) and self-bias (ignoring peers), undermining debate reliability, and proposes response anonymization as a solution to force agents to evaluate arguments on merit rather than source identity.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10
🧠

XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers

Researchers have developed XFED, a novel model poisoning attack that compromises federated learning systems without requiring attackers to communicate or coordinate with each other. The attack successfully bypasses eight state-of-the-art defenses, revealing fundamental security vulnerabilities in FL deployments that were previously underestimated.

AI · Neutral · arXiv – CS AI · Apr 13 · 7/10
🧠

Many-Tier Instruction Hierarchy in LLM Agents

Researchers propose Many-Tier Instruction Hierarchy (ManyIH), a new framework for resolving conflicts among instructions given to large language model agents from multiple sources with varying authority levels. Current models achieve only ~40% accuracy when navigating up to 12 conflicting instruction tiers, revealing a critical safety gap in agentic AI systems.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10
🧠

Watt Counts: Energy-Aware Benchmark for Sustainable LLM Inference on Heterogeneous GPU Architectures

Researchers introduced Watt Counts, an open-access dataset containing over 5,000 energy consumption experiments across 50 LLMs and 10 NVIDIA GPUs, revealing that optimal hardware choices for energy-efficient inference vary significantly by model and deployment scenario. The study demonstrates practitioners can reduce energy consumption by up to 70% in server deployments with minimal performance impact, addressing a critical gap in energy-aware LLM deployment guidance.

🏢 Nvidia