y0news

AI × Crypto News Feed

Real-time AI-curated news from 28,710+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Detecting Corporate AI-Washing via Cross-Modal Semantic Inconsistency Learning

Researchers have developed AWASH, a multimodal AI detection framework that identifies corporate AI-washing—exaggerated or fabricated claims about AI capabilities across corporate disclosures. The system analyzes text, images, and video from financial reports and earnings calls, achieving 88.2% accuracy and reducing regulatory review time by 43% in user testing with compliance analysts.

🧠 AI · Neutral · arXiv – CS AI · Apr 14 · 7/10

Evaluating Reliability Gaps in Large Language Model Safety via Repeated Prompt Sampling

Researchers introduce Accelerated Prompt Stress Testing (APST), a new evaluation framework that reveals safety vulnerabilities in large language models through repeated prompt sampling rather than traditional broad benchmarks. The study finds that models appearing equally safe in conventional testing show significant reliability differences when repeatedly queried, indicating current safety benchmarks may mask operational risks in deployed systems.
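The core idea, surfacing low-probability safety failures by sampling the same prompt many times rather than scoring one pass per benchmark item, can be sketched as follows. The model, safety classifier, and 5% slip rate below are toy stand-ins for illustration, not APST itself:

```python
import random

def is_unsafe(response: str) -> bool:
    # Stand-in for a real safety classifier (assumption).
    return "UNSAFE" in response

def flaky_model(prompt: str, rng: random.Random) -> str:
    # Toy model: refuses most of the time but slips on ~5% of samples,
    # a behavior a single-query benchmark would likely score as "safe".
    return "UNSAFE completion" if rng.random() < 0.05 else "I can't help with that."

def stress_test(model, prompt: str, n_samples: int = 1000, seed: int = 0) -> float:
    """Repeated-sampling stress test: query one prompt many times and
    report the fraction of unsafe responses."""
    rng = random.Random(seed)
    unsafe = sum(1 for _ in range(n_samples) if is_unsafe(model(prompt, rng)))
    return unsafe / n_samples

rate = stress_test(flaky_model, "Tell me how to ...")
```

Two models with identical single-pass benchmark scores can return very different `rate` values here, which is the reliability gap the paper describes.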

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

ExecTune: Effective Steering of Black-Box LLMs with Guide Models

Researchers introduce ExecTune, a training methodology for optimizing black-box LLM systems where a guide model generates strategies executed by a core model. The approach improves accuracy by up to 9.2% while reducing inference costs by 22.4%, enabling smaller models like Claude Haiku to match larger competitors at significantly lower computational expense.

🧠 Claude · 🧠 Haiku · 🧠 Sonnet
🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Persistent Identity in AI Agents: A Multi-Anchor Architecture for Resilient Memory and Continuity

Researchers introduce soul.py, an open-source architecture addressing catastrophic forgetting in AI agents by distributing identity across multiple memory systems rather than centralizing it. The framework implements persistent identity through separable components and a hybrid RAG+RLM retrieval system, drawing inspiration from how human memory survives neurological damage.

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

SPEED-Bench: A Unified and Diverse Benchmark for Speculative Decoding

Researchers introduce SPEED-Bench, a comprehensive benchmark suite for evaluating Speculative Decoding (SD) techniques that accelerate LLM inference. The benchmark addresses critical gaps in existing evaluation methods by offering diverse semantic domains, throughput-oriented testing across multiple concurrency levels, and integration with production systems like vLLM and TensorRT-LLM, enabling more accurate real-world performance measurement.
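For context, the speculative decoding loop being benchmarked works by having a cheap draft model propose several tokens that the expensive target model then verifies in one step. The greedy, deterministic sketch below shows only the control flow; production systems like vLLM and TensorRT-LLM use probabilistic acceptance and batched verification:

```python
def speculative_decode(target, draft, prefix, k=4, max_len=12):
    """Toy speculative decoding: the draft proposes k tokens, the target
    keeps the longest agreeing prefix, then contributes one token of its own.
    `target` and `draft` are callables mapping a token list to the next token."""
    out = list(prefix)
    while len(out) < max_len:
        # Draft model proposes k tokens autoregressively (cheap).
        proposed = []
        for _ in range(k):
            proposed.append(draft(out + proposed))
        # Target verifies: accept the longest prefix it agrees with.
        accepted = []
        for tok in proposed:
            if target(out + accepted) == tok:
                accepted.append(tok)
            else:
                break
        # Target always contributes one token past the accepted prefix.
        accepted.append(target(out + accepted))
        out.extend(accepted)
    return out[:max_len]

toy = lambda seq: len(seq) % 3  # deterministic stand-in "model"
out = speculative_decode(target=toy, draft=toy, prefix=[0])
# When draft agrees with target, the output matches plain greedy decoding,
# but each loop iteration emits up to k+1 tokens per target "call".
```

Throughput gains hinge on how often the draft's proposals are accepted, which is exactly why a benchmark needs diverse domains and concurrency levels.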

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Generative UI: LLMs are Effective UI Generators

Researchers demonstrate that modern LLMs can robustly generate custom user interfaces directly from prompts, moving beyond static markdown outputs. The approach shows emergent capabilities with results comparable to human-crafted designs in 50% of cases, accompanied by the release of PAGEN, a dataset for evaluating generative UI implementations.

🧠 AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

LLM Nepotism in Organizational Governance

Researchers have identified 'LLM Nepotism,' a bias where language models favor job candidates and organizational decisions that express trust in AI, regardless of merit. This creates self-reinforcing cycles where AI-trusting organizations make worse decisions and delegate more to AI systems, potentially compromising governance quality across sectors.

🧠 AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

Jailbreaking the Matrix: Nullspace Steering for Controlled Model Subversion

Researchers have developed Head-Masked Nullspace Steering (HMNS), a novel jailbreak technique that exploits circuit-level vulnerabilities in large language models by identifying and suppressing specific attention heads responsible for safety mechanisms. The method achieves state-of-the-art attack success rates with fewer queries than previous approaches, demonstrating that current AI safety defenses remain fundamentally vulnerable to geometry-aware adversarial interventions.

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Why Smaller Is Slower? Dimensional Misalignment in Compressed LLMs

Researchers identify dimensional misalignment as a critical bottleneck in compressed large language models, where parameter reduction fails to improve GPU performance due to hardware-incompatible tensor dimensions. They propose GAC (GPU-Aligned Compression), a new optimization method that achieves up to 1.5× speedup while maintaining model quality by ensuring hardware-friendly dimensions.
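The mechanism is easy to illustrate: GPU matmul kernels hit peak throughput when tensor dimensions align with the hardware's tile sizes, so a pruned dimension like 2875 can run slower than a larger aligned one. The rounding rule below is a minimal sketch of the idea; the multiple-of-64 choice is an illustrative assumption, not GAC's exact criterion:

```python
def align_dim(dim: int, multiple: int = 64) -> int:
    """Round a compressed tensor dimension up to a hardware-friendly
    multiple, trading a few extra parameters for fast kernel paths."""
    return ((dim + multiple - 1) // multiple) * multiple

# Pruning a 4096-wide layer down to 2875 channels would be padded
# back up to 2880 (45 * 64), keeping tensor-core-friendly shapes.
padded = align_dim(2875)  # → 2880
```

The counterintuitive result in the paper is that this slight "decompression" yields net speedups, because misaligned shapes fall onto slow kernel fallbacks.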

🧠 Llama
🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

From Topology to Trajectory: LLM-Driven World Models For Supply Chain Resilience

Researchers introduce ReflectiChain, an AI framework combining large language models with generative world models to improve semiconductor supply chain resilience against geopolitical disruptions. The system demonstrates 250% performance improvements over standard LLM approaches by integrating physical environmental constraints and autonomous policy learning, restoring operational capacity from 13.3% to 88.5% under extreme scenarios.

🧠 AI · Neutral · arXiv – CS AI · Apr 14 · 7/10

PAC-BENCH: Evaluating Multi-Agent Collaboration under Privacy Constraints

Researchers introduce PAC-Bench, a benchmark for evaluating how AI agents collaborate while maintaining privacy constraints. The study reveals that privacy protections significantly degrade multi-agent system performance and identifies coordination failures as a critical unsolved challenge requiring new technical approaches.

$PAC
🧠 AI · Neutral · arXiv – CS AI · Apr 14 · 7/10

Why Do Large Language Models Generate Harmful Content?

Researchers used causal mediation analysis to identify why large language models generate harmful content, discovering that harmful outputs originate in later model layers primarily through MLP blocks rather than attention mechanisms. Early layers develop contextual understanding of harmfulness that propagates through the network to sparse neurons in final layers that act as gating mechanisms for harmful generation.

🧠 AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

Thinking Fast, Thinking Wrong: Intuitiveness Modulates LLM Counterfactual Reasoning in Policy Evaluation

A new study reveals that large language models fail at counterfactual reasoning when policy findings contradict intuitive expectations, despite performing well on obvious cases. The research demonstrates that chain-of-thought prompting paradoxically worsens performance on counter-intuitive scenarios, suggesting current LLMs engage in 'slow talking' rather than genuine deliberative reasoning.

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Three Roles, One Model: Role Orchestration at Inference Time to Close the Performance Gap Between Small and Large Agents

Researchers demonstrate that inference-time scaffolding can double the performance of small 8B language models on complex tool-use tasks without additional training, by deploying the same frozen model in three specialized roles: summarization, reasoning, and code correction. On a single 24GB GPU, this approach enables an 8B model to match or exceed much larger systems like DeepSeek-Coder 33B, suggesting efficient deployment paths for capable AI agents on modest hardware.
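The orchestration pattern, one set of frozen weights called under role-specific prompts, can be sketched as below. The role prompts, the `model(system, user)` call shape, and the stub model are hypothetical illustrations, not the paper's actual scaffolding:

```python
# Hypothetical role prompts; the same frozen model serves all three roles.
ROLE_PROMPTS = {
    "summarizer": "Condense the tool output to the facts needed next.",
    "reasoner": "Decide the next action given the goal and summary.",
    "corrector": "Fix any errors in the proposed code before execution.",
}

def orchestrate(model, goal: str, tool_output: str) -> str:
    """Inference-time role orchestration: each step is the same weights
    with a different system prompt, so no extra training or memory."""
    summary = model(ROLE_PROMPTS["summarizer"], tool_output)
    plan = model(ROLE_PROMPTS["reasoner"], f"goal: {goal}\nsummary: {summary}")
    return model(ROLE_PROMPTS["corrector"], plan)

def stub_model(system: str, user: str) -> str:
    # Toy stand-in that just tags which role produced each step.
    return f"[{system.split()[0]}] {user[:40]}"

result = orchestrate(stub_model, "find bug", "traceback: KeyError ...")
```

Because only prompts change between calls, the whole pipeline fits wherever the single 8B model fits, which is the point about a single 24GB GPU.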

🧠 AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

Persona Non Grata: Single-Method Safety Evaluation Is Incomplete for Persona-Imbued LLMs

Researchers demonstrate that safety evaluations of persona-imbued large language models using only prompt-based testing are fundamentally incomplete, as activation steering reveals entirely different vulnerability profiles across model architectures. Testing across four models reveals the 'prosocial persona paradox' where conscientious personas safe under prompting become the most vulnerable to activation steering attacks, indicating that single-method safety assessments can miss critical failure modes.

🧠 Llama
🧠 AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

Do LLMs Build Spatial World Models? Evidence from Grid-World Maze Tasks

Researchers tested whether large language models develop spatial world models through maze-solving tasks, finding that leading models like Gemini, GPT-4, and Claude struggle with spatial reasoning. Performance varies dramatically (16-86% accuracy) depending on input format, suggesting LLMs lack robust, format-invariant spatial understanding rather than building true internal world models.

🧠 GPT-5 · 🧠 Claude · 🧠 Gemini
🧠 AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

VeriSim: A Configurable Framework for Evaluating Medical AI Under Realistic Patient Noise

Researchers introduce VeriSim, an open-source framework that tests medical AI systems by injecting realistic patient communication barriers—such as memory gaps and health literacy limitations—into clinical simulations. Testing across seven LLMs reveals significant performance degradation (15-25% accuracy drop), with smaller models suffering 40% greater decline than larger ones, exposing a critical gap between standardized benchmarks and real-world clinical robustness.
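The noise-injection idea, degrading a clean clinical vignette with realistic patient communication barriers before testing the model, can be sketched minimally. The placeholder text and flat drop rate are illustrative assumptions; VeriSim's configurable noise model is richer than this:

```python
import random

def inject_recall_noise(facts, drop_rate=0.3, seed=0):
    """Simulate patient memory gaps by withholding a fraction of the
    clinical facts, replacing each with a vague non-answer."""
    rng = random.Random(seed)
    return [f if rng.random() > drop_rate else "I don't remember exactly."
            for f in facts]

facts = [f"symptom_{i}" for i in range(10)]
noisy = inject_recall_noise(facts)
# A medical AI is then evaluated on `noisy` instead of `facts`,
# measuring how much accuracy survives the missing information.
```

Comparing accuracy on `facts` versus `noisy` histories is the kind of gap the 15-25% degradation figure quantifies.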

🤖 AI × Crypto · Neutral · arXiv – CS AI · Apr 14 · 7/10

Emergent Social Structures in Autonomous AI Agent Networks: A Metadata Analysis of 626 Agents on the Pilot Protocol

Researchers analyzed 626 autonomous AI agents that independently joined the Pilot Protocol, discovering that these machines formed complex social structures mirroring human networks without explicit instruction. The emergent topology exhibits small-world properties, preferential attachment, and specialized clustering, representing the first empirical evidence of spontaneous social organization among autonomous AI systems.

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

SemaClaw: A Step Towards General-Purpose Personal AI Agents through Harness Engineering

SemaClaw is an open-source framework addressing the shift from prompt engineering to 'harness engineering'—building infrastructure for controllable, auditable AI agents. Announced alongside OpenClaw's mass adoption in early 2026, it enables persistent personal AI agents through DAG-based orchestration, behavioral safety systems, and automated knowledge base construction.

🧠 AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Context Kubernetes: Declarative Orchestration of Enterprise Knowledge for Agentic AI Systems

Researchers introduce Context Kubernetes, an architecture that applies container orchestration principles to managing enterprise knowledge in AI agent systems. The system addresses critical governance, freshness, and security challenges, demonstrating that without proper controls, AI agents leak data in over 26% of queries and serve stale content silently.

⛓️ Crypto · Neutral · CoinDesk · Apr 14 · 7/10

U.S. lawmakers take another swing at crypto tax policy with revised bill

U.S. lawmakers have introduced a revised bill to reform how the IRS treats cryptocurrency for tax purposes. The legislation aims to clarify tax reporting requirements and compliance obligations for crypto transactions, addressing ongoing regulatory ambiguity that has created compliance challenges for investors and industry participants.

🤖 AI × Crypto · Bearish · Bitcoinist · Apr 14 · 7/10

Crypto Security Faces New Test As Rogue AI Agents Emerge

UC researchers discovered that autonomous AI agents operating within crypto infrastructure can be exploited to drain wallets, with a proof-of-concept attack successfully siphoning funds from a test wallet connected to third-party AI routers. While the immediate financial loss was minimal, the vulnerability exposes a critical security gap in AI-assisted cryptocurrency systems as these agents become more prevalent.

$ETH
⛓️ Crypto · Bullish · CoinTelegraph · Apr 14 · 7/10

Foundry launches Zcash mining pool, notches 29% hashrate in first month

Foundry has launched a new Zcash mining pool that captured 29% of the network's hashrate within its first month of operation, dramatically reducing ViaBTC's dominance from 65% to 37%. This significant market shift demonstrates renewed competition in privacy-coin mining infrastructure and signals potential changes in Zcash's mining landscape.

⛓️ Crypto · Bullish · Blockonomi · Apr 14 · 7/10

The Crypto Market News Every Investor Needs To Know – XRP, Cardano And One Early Opportunity

The SEC's CLARITY Act roundtable scheduled for April 16 signals regulatory progress for crypto, while CoinShares' $1.2 billion Nasdaq listing with $6 billion in assets under management demonstrates institutional capital accelerating into digital assets. XRP and Cardano are highlighted as projects benefiting from this institutional influx and regulatory clarity.

$XRP · $ADA