y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,631+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · Mar 47/104
🧠

SorryDB: Can AI Provers Complete Real-World Lean Theorems?

Researchers have introduced SorryDB, a dynamic benchmark for evaluating AI systems' ability to prove mathematical theorems using the Lean proof assistant. The benchmark draws from 78 real-world formalization projects and addresses limitations of static benchmarks by providing continuously updated tasks that better reflect community needs.
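In Lean, an unfinished proof obligation is marked with the keyword `sorry`; SorryDB-style tasks ask a prover to eliminate that placeholder. The theorem below is an invented example of what such a task might look like, not an actual SorryDB item.

```lean
-- A hypothetical SorryDB-style task: the statement is given, and the
-- prover must replace `sorry` with a complete proof.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  sorry

-- A successful submission fills in the proof, e.g.:
theorem add_comm_example' (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```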

AI · Bullish · arXiv – CS AI · Mar 47/104
🧠

Retrieval-Augmented Robots via Retrieve-Reason-Act

Researchers introduce Retrieval-Augmented Robotics (RAR), a new paradigm enabling robots to actively retrieve and use external visual documentation to execute complex tasks. The system uses a Retrieve-Reason-Act loop where robots search unstructured visual manuals, align 2D diagrams with 3D objects, and synthesize executable plans for assembly tasks.
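The loop described above can be sketched in a few lines. This is a toy illustration of the retrieve-reason-act pattern only; the function names, manual format, and keyword matching are invented stand-ins for the paper's visual retrieval and 2D-to-3D alignment.

```python
# Hypothetical Retrieve-Reason-Act loop over a "visual manual",
# represented here as keyword-tagged pages for illustration.

def retrieve(query, manual_pages):
    """Return pages whose keywords overlap the query (stand-in for visual retrieval)."""
    return [p for p in manual_pages if any(w in p["keywords"] for w in query.split())]

def reason(pages):
    """Turn retrieved diagram pages into an ordered plan of assembly steps."""
    return [step for p in sorted(pages, key=lambda p: p["page"]) for step in p["steps"]]

def act(plan):
    """Pretend to execute each step, reporting success."""
    return [(step, "done") for step in plan]

manual = [
    {"page": 2, "keywords": {"attach", "leg"}, "steps": ["attach leg A"]},
    {"page": 1, "keywords": {"align", "frame"}, "steps": ["align frame"]},
]
pages = retrieve("align frame then attach leg", manual)
plan = reason(pages)
print(act(plan))  # steps are executed in page order
```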

AI · Bullish · arXiv – CS AI · Mar 46/104
🧠

EvoSkill: Automated Skill Discovery for Multi-Agent Systems

Researchers have developed EvoSkill, an automated framework that enables AI agents to discover and refine domain-specific skills through iterative failure analysis. The system demonstrated significant performance improvements on specialized tasks, with accuracy gains of 7.3% on financial data analysis and 12.1% on search-augmented QA, while showing transferable capabilities across different domains.

AI · Bullish · arXiv – CS AI · Mar 46/102
🧠

Rethinking Code Similarity for Automated Algorithm Design with LLMs

Researchers introduce BehaveSim, a new method to measure algorithmic similarity by analyzing problem-solving behavior rather than code syntax. The approach enhances AI-driven algorithm design frameworks and enables systematic analysis of AI-generated algorithms through behavioral clustering.
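The core idea, comparing what algorithms do rather than how their source reads, can be illustrated with a minimal sketch. This is not BehaveSim's implementation; the similarity measure below (agreement rate on probe inputs) is a deliberately simple stand-in for behavioral clustering.

```python
# Behavioral similarity in the spirit of BehaveSim: two algorithms are
# compared by their outputs on probe inputs, not by their source text.

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def builtin_sort(xs):
    return sorted(xs)

def reversed_sort(xs):
    return sorted(xs, reverse=True)

def behavior_similarity(f, g, probes):
    """Fraction of probe inputs on which f and g produce identical outputs."""
    return sum(f(p) == g(p) for p in probes) / len(probes)

probes = [[3, 1, 2], [5, 4], [1], []]
print(behavior_similarity(bubble_sort, builtin_sort, probes))   # syntactically different, behaviorally identical
print(behavior_similarity(bubble_sort, reversed_sort, probes))  # agrees only on trivial inputs
```

Note that `bubble_sort` and `builtin_sort` share no code yet score as identical, which is exactly the distinction syntax-based similarity misses.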

AI · Bullish · arXiv – CS AI · Mar 46/104
🧠

Agentified Assessment of Logical Reasoning Agents

Researchers present a new framework for evaluating logical reasoning AI agents using an "assessor agent" that can issue tasks, enforce execution limits, and record structured failure types. Their auto-formalization agent achieved 86.70% accuracy on logical reasoning tasks, outperforming traditional chain-of-thought approaches by nearly 13 percentage points.

AI · Bullish · arXiv – CS AI · Mar 47/103
🧠

Guideline-Grounded Evidence Accumulation for High-Stakes Agent Verification

Researchers developed GLEAN, a new AI verification framework that improves reliability of LLM-powered agents in high-stakes decisions like clinical diagnosis. The system uses expert guidelines and Bayesian logistic regression to better verify AI agent decisions, showing 12% improvement in accuracy and 50% better calibration in medical diagnosis tests.
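A minimal sketch of what logistic evidence accumulation can look like, assuming (as the summary suggests) that guideline checks feed a logistic-regression-style combiner. The guideline names, weights, and bias below are invented for illustration and are not GLEAN's actual model.

```python
import math

# Hypothetical guideline checks with learned-looking weights; a negative
# weight means the evidence argues against the agent's decision.
GUIDELINES = {
    "symptoms_match_diagnosis": 1.4,
    "labs_support_diagnosis":   1.1,
    "contradicting_history":   -2.0,
}
BIAS = -0.5

def verify(evidence):
    """Map binary guideline checks to a probability that the agent's
    decision is correct, via a logistic model."""
    z = BIAS + sum(w for name, w in GUIDELINES.items() if evidence.get(name))
    return 1 / (1 + math.exp(-z))

strong = verify({"symptoms_match_diagnosis": True, "labs_support_diagnosis": True})
weak = verify({"symptoms_match_diagnosis": True, "contradicting_history": True})
print(round(strong, 3), round(weak, 3))
```

Because the output is a probability rather than a yes/no verdict, calibration (how well those probabilities match observed error rates) can be measured directly, which is the property the 50% calibration improvement refers to.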

AI · Neutral · arXiv – CS AI · Mar 47/103
🧠

Retrievit: In-context Retrieval Capabilities of Transformers, State Space Models, and Hybrid Architectures

Research compares Transformers, State Space Models (SSMs), and hybrid architectures for in-context retrieval tasks, finding hybrid models excel at information-dense retrieval while Transformers remain superior for position-based tasks. SSM-based models develop unique locality-aware embeddings that create interpretable positional structures, explaining their specific strengths and limitations.

AI · Bullish · arXiv – CS AI · Mar 46/102
🧠

SAE as a Crystal Ball: Interpretable Features Predict Cross-domain Transferability of LLMs without Training

Researchers developed SAE-based Transferability Score (STS), a new metric using sparse autoencoders to predict how well fine-tuned large language models will perform across different domains without requiring actual training. The method achieves correlation coefficients above 0.7 with actual performance changes and provides interpretable insights into model adaptation.

AI · Neutral · arXiv – CS AI · Mar 46/105
🧠

Architecting Trust in Artificial Epistemic Agents

Researchers propose a framework for developing trustworthy AI agents that function as epistemic entities, capable of pursuing knowledge goals and shaping information environments. The paper argues that as AI models increasingly replace traditional search methods and provide specialized advice, their calibration to human epistemic norms becomes critical to prevent cognitive deskilling and epistemic drift.

AI · Bullish · arXiv – CS AI · Mar 46/102
🧠

OrchMAS: Orchestrated Reasoning with Multi Collaborative Heterogeneous Scientific Expert Structured Agents

Researchers have developed OrchMAS, a new multi-agent AI framework that uses specialized expert agents and dynamic orchestration to improve reasoning in scientific domains. The system addresses limitations of existing multi-agent frameworks by enabling flexible role allocation, prompt refinement, and heterogeneous model integration for complex scientific tasks.

AI · Bullish · arXiv – CS AI · Mar 46/104
🧠

REGAL: A Registry-Driven Architecture for Deterministic Grounding of Agentic AI in Enterprise Telemetry

Researchers present REGAL, a registry-driven architecture that enables AI agents to work deterministically with enterprise telemetry data from systems like CI/CD pipelines and observability platforms. The system addresses key challenges of grounding Large Language Models on private enterprise data through structured data processing and version-controlled action spaces.

AI · Bullish · arXiv – CS AI · Mar 46/103
🧠

TikZilla: Scaling Text-to-TikZ with High-Quality Data and Reinforcement Learning

Researchers have developed TikZilla, a new AI model that generates high-quality scientific figures from text descriptions using TikZ code. The model uses a dataset four times larger than previous versions and combines supervised learning with reinforcement learning to achieve performance matching GPT-5 while using much smaller model sizes.
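For readers unfamiliar with the output format: text-to-TikZ means mapping a natural-language prompt to LaTeX drawing code. The snippet below is an invented example of such a prompt/output pair, not TikZilla's actual output.

```latex
% Hypothetical output for the prompt "a right triangle with legs a, b
% and hypotenuse c" -- illustrative TikZ only.
\begin{tikzpicture}
  \draw (0,0) -- (3,0) node[midway, below] {$a$}
        (3,0) -- (3,2) node[midway, right] {$b$}
        (3,2) -- (0,0) node[midway, above left] {$c$};
\end{tikzpicture}
```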

AI · Bullish · arXiv – CS AI · Mar 46/103
🧠

RAPO: Expanding Exploration for LLM Agents via Retrieval-Augmented Policy Optimization

Researchers introduce RAPO (Retrieval-Augmented Policy Optimization), a new reinforcement learning framework that improves LLM agent training by incorporating retrieval mechanisms for broader exploration. The method achieves 5% performance gains across 14 datasets and 1.2x faster training efficiency by using hybrid-policy rollouts and retrieval-aware optimization.

AI · Neutral · arXiv – CS AI · Mar 46/102
🧠

Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation

Researchers propose PURE, a new framework for AI-powered recommendation systems that addresses preference-inconsistent explanations: cases where the AI gives factually correct but unconvincing reasoning that conflicts with user preferences. The system uses a select-then-generate approach to improve both evidence selection and explanation generation, demonstrating reduced hallucinations while maintaining recommendation accuracy.

AI · Bullish · arXiv – CS AI · Mar 47/103
🧠

Odin: Multi-Signal Graph Intelligence for Autonomous Discovery in Knowledge Graphs

Researchers present Odin, the first production-deployed graph intelligence engine that autonomously discovers patterns in knowledge graphs without predefined queries. The system uses a novel COMPASS scoring metric combining structural, semantic, temporal, and community-aware signals, and has been successfully deployed in regulated healthcare and insurance environments.
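A multi-signal score like COMPASS can be sketched as a weighted combination. The four signal names come from the summary, but the weights and the linear combination rule below are invented for illustration; the actual COMPASS metric may combine signals differently.

```python
# Hypothetical COMPASS-style scorer: each discovered pattern gets per-signal
# scores in [0, 1], combined by invented weights.
WEIGHTS = {"structural": 0.35, "semantic": 0.30, "temporal": 0.15, "community": 0.20}

def compass_score(signals):
    """Weighted combination of per-signal scores; missing signals count as 0."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

pattern = {"structural": 0.9, "semantic": 0.8, "temporal": 0.4, "community": 0.7}
print(round(compass_score(pattern), 3))
```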

AI · Bullish · arXiv – CS AI · Mar 46/103
🧠

Agentic AI-based Coverage Closure for Formal Verification

Researchers have developed an agentic AI-driven workflow using Large Language Models to automate coverage analysis for formal verification in integrated chip development. The approach systematically identifies coverage gaps and generates required formal properties, demonstrating measurable improvements in coverage metrics that correlate with design complexity.

AI · Bullish · arXiv – CS AI · Mar 47/102
🧠

Saarthi for AGI: Towards Domain-Specific General Intelligence for Formal Verification

Researchers have enhanced the Saarthi AI framework for formal verification, achieving 70% better accuracy in generating SystemVerilog assertions and 50% fewer iterations to reach coverage closure. The framework uses multi-agent collaboration and improved RAG techniques to move toward domain-specific AI intelligence for verification tasks.

AI · Bullish · arXiv – CS AI · Mar 46/102
🧠

Expectation and Acoustic Neural Network Representations Enhance Music Identification from Brain Activity

Researchers developed a method to improve EEG-based music identification by using artificial neural networks that distinguish between acoustic and expectation-related brain representations. The approach combines both types of neural representations to achieve better performance than traditional methods, potentially advancing brain-computer interfaces and neural decoding applications.

AI · Bullish · arXiv – CS AI · Mar 46/102
🧠

AI-for-Science Low-code Platform with Bayesian Adversarial Multi-Agent Framework

Researchers have developed a Bayesian adversarial multi-agent framework for AI-driven scientific code generation, featuring three coordinated LLM agents that work together to improve reliability and reduce errors. The Low-code Platform (LCP) enables non-expert users to generate scientific code through natural language prompts, demonstrating superior performance in benchmark tests and Earth Science applications.

AI · Bullish · arXiv – CS AI · Mar 47/103
🧠

Density-Guided Response Optimization: Community-Grounded Alignment via Implicit Acceptance Signals

Researchers introduce Density-Guided Response Optimization (DGRO), a new AI alignment method that learns community preferences from implicit acceptance signals rather than explicit feedback. The technique uses geometric patterns in how communities naturally engage with content to train language models without requiring costly annotation or preference labeling.
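The geometric intuition, that community-accepted content clusters densely in embedding space, can be illustrated with a toy density estimate. Everything below (the 2-D embeddings, the inverse-distance density measure) is fabricated for illustration; DGRO's actual geometry is more involved.

```python
# Toy density-guided selection: prefer the candidate response that sits
# in the densest region of previously accepted content.

def density(point, others, k=2):
    """Inverse mean distance to the k nearest neighbors (higher = denser)."""
    dists = sorted(((point[0] - o[0]) ** 2 + (point[1] - o[1]) ** 2) ** 0.5
                   for o in others)
    return 1.0 / (sum(dists[:k]) / k + 1e-9)

accepted = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]            # community-accepted content
candidates = {"on_norm": (0.05, 0.05), "off_norm": (2.0, 2.0)}

scores = {name: density(p, accepted) for name, p in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # the candidate inside the dense "accepted" cluster wins
```

No explicit preference labels appear anywhere above: the accepted set alone supplies the training signal, which is the annotation-free property the summary highlights.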

AI · Bearish · arXiv – CS AI · Mar 47/102
🧠

Inherited Goal Drift: Contextual Pressure Can Undermine Agentic Goals

Research shows that state-of-the-art language model agents are susceptible to "goal drift": deviating from their original objectives when exposed to contextual pressure from weaker agents' behaviors. Only GPT-5.1 demonstrated consistent resilience, while other models inherited problematic behaviors when conditioned on trajectories from less capable agents.
