y0news

AI × Crypto News Feed

Real-time AI-curated news from 32,888+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Finite-Time Analysis of MCTS in Continuous POMDP Planning

Researchers present the first finite-time theoretical analysis of Monte Carlo Tree Search (MCTS) applied to Partially Observable Markov Decision Processes (POMDPs), bridging a critical gap in algorithmic guarantees. The paper introduces Voro-POMCPOW, which uses Voronoi cell partitioning for continuous observation spaces, proving high-probability bounds on value estimates while maintaining competitive empirical performance.
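
To make the partitioning idea concrete, here is a minimal sketch of assigning continuous observations to Voronoi cells of fixed sites, the kind of discretization the summary attributes to Voro-POMCPOW. The function name and two-site example are illustrative assumptions, not the paper's code.

```python
import math

def voronoi_cell(observation, sites):
    """Assign a continuous observation to the Voronoi cell of its nearest site.

    Toy illustration of the partitioning idea: continuous observations that
    fall in the same cell can share a node in the search tree.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(sites)), key=lambda i: dist(observation, sites[i]))

# Two sites partition the plane; observations map to the nearer one.
sites = [(0.0, 0.0), (10.0, 0.0)]
print(voronoi_cell((1.0, 2.0), sites))   # cell of site 0
print(voronoi_cell((9.0, -1.0), sites))  # cell of site 1
```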

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 5/10

Online Goal Recognition using Path Signature and Dynamic Time Warping

Researchers introduce a novel online goal recognition method using path signatures and dynamic time warping to efficiently encode and compare continuous trajectory data. The approach demonstrates superior predictive accuracy and planning efficiency compared to existing state-of-the-art methods while maintaining competitive offline performance.
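
Dynamic time warping is the standard trajectory-comparison building block the summary mentions; a textbook implementation looks like this (the paper's pipeline additionally encodes trajectories with path signatures before comparison).

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D trajectories."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # insertion
                                    cost[i][j - 1],      # deletion
                                    cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw_distance([0, 1, 2], [0, 1, 2]))     # identical: 0.0
print(dtw_distance([0, 0, 1, 2], [0, 1, 2]))  # time-shifted: still 0.0
```

The time-shifted pair shows why DTW suits online goal recognition: partial trajectories observed at different speeds still align cheaply.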

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Tacit Knowledge Extraction via Logic Augmented Generation and Active Inference

Researchers introduce a neuro-symbolic framework combining Logic-Augmented Generation and Active Inference to extract and formalize tacit knowledge into machine-interpretable Knowledge Graphs. The approach addresses a critical gap in knowledge engineering by capturing implicit assumptions and contextual expertise from procedural domains like manufacturing, demonstrated through analysis of assembly repair videos.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Open-Ended Task Discovery via Bayesian Optimization

Researchers introduce Generate-Select-Refine (GSR), a Bayesian optimization framework that dynamically discovers and refines tasks during scientific workflows rather than optimizing fixed objectives. The approach demonstrates superior performance across product development, chemical synthesis, algorithm analysis, and patent repurposing compared to existing LLM-based optimizers.
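
The generate-select-refine pattern itself can be sketched in a few lines. This toy loop optimizes a numeric parameter with random perturbation in place of the paper's LLM proposals and Bayesian selection; everything here (function name, Gaussian refinement, pool size) is an assumption for illustration only.

```python
import random

def generate_select_refine(score, seed_value, rounds=30, pool=8):
    """Toy generate-select-refine loop on a single numeric 'task parameter'.

    Generate: propose a pool of candidate refinements around the incumbent.
    Select:   keep whichever candidate (or the incumbent) scores best.
    Refine:   the winner becomes the next incumbent.
    """
    random.seed(0)  # deterministic for the demo
    best = seed_value
    for _ in range(rounds):
        candidates = [best + random.gauss(0, 0.5) for _ in range(pool)]
        best = max(candidates + [best], key=score)
    return best

# Maximize a simple concave objective; the loop should approach x = 3.
x = generate_select_refine(lambda v: -(v - 3.0) ** 2, seed_value=0.0)
print(x)
```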

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

The Limits of AI-Driven Allocation: Optimal Screening under Aleatoric Uncertainty

Researchers present a framework for optimally combining algorithmic risk scoring with direct verification screening in resource allocation decisions. The study demonstrates that even perfect predictive models cannot eliminate misallocation due to irreducible uncertainty about individual vulnerability, and shows that screening is most effective when focused on borderline cases rather than high-risk units.
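
The "screen borderline cases first" finding can be illustrated with a few lines: given risk scores and an allocation threshold, rank units for costly verification by closeness to the boundary. The 0.5 threshold and function name are assumptions for this sketch, not the paper's formulation.

```python
def screening_order(scores, threshold=0.5):
    """Rank units for costly verification by closeness to the decision boundary.

    With a fixed screening budget, borderline cases (scores near the
    allocation threshold) are verified before clear-cut high- or low-risk
    units, which is where verification changes the most decisions.
    """
    return sorted(range(len(scores)), key=lambda i: abs(scores[i] - threshold))

risk = [0.95, 0.52, 0.10, 0.48, 0.70]
order = screening_order(risk)
print(order)  # borderline units 1 and 3 come first
```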

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Alternating Target-Path Planning for Scalable Multi-Agent Coordination

Researchers propose a decoupled iterative framework for multi-agent coordination that separates target assignment from pathfinding, achieving better scalability than existing conflict-based approaches. The method leverages fast suboptimal solvers like LaCAM and feedback-driven reassignment to handle larger agent systems while maintaining acceptable solution quality.
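
A minimal sketch of the decoupling: assign targets first, then plan each path independently. This one-shot greedy version omits the paper's feedback-driven reassignment and LaCAM solver; the grid, BFS planner, and greedy assignment are stand-ins chosen for brevity.

```python
from collections import deque

def bfs_path_len(grid, start, goal):
    """Shortest path length on a 4-connected grid (0 = free, 1 = wall)."""
    rows, cols = len(grid), len(grid[0])
    seen, q = {start}, deque([(start, 0)])
    while q:
        (r, c), d = q.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return None

def assign_then_plan(grid, agents, targets):
    """Decoupled coordination: greedily give each agent the nearest unclaimed
    target, then plan paths independently (no joint conflict search)."""
    remaining = list(targets)
    plan = {}
    for a in agents:
        t = min(remaining, key=lambda t: bfs_path_len(grid, a, t))
        remaining.remove(t)
        plan[a] = (t, bfs_path_len(grid, a, t))
    return plan

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
plan = assign_then_plan(grid, [(0, 0), (2, 2)], [(0, 2), (2, 0)])
print(plan)
```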

🧠 AI · Bullish · arXiv – CS AI · 5h ago · importance 6/10

Hierarchical Task Network Planning with LLM-Generated Heuristics

Researchers demonstrate that large language models can generate effective heuristics for hierarchical task network (HTN) planning, achieving near-optimal performance compared to state-of-the-art planners. LLM-generated heuristics reduce search effort on 83% of benchmark problems, suggesting AI models can enhance algorithmic planning efficiency beyond classical approaches.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

SREGym: A Live Benchmark for AI SRE Agents with High-Fidelity Failure Scenarios

SREGym is a new open-source benchmark platform that enables realistic evaluation of AI agents designed to diagnose and fix failures in production systems. The framework simulates high-fidelity failure scenarios across cloud-native stacks and currently includes 90 SRE problems, revealing significant performance variations among frontier AI models.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

AgentEscapeBench: Evaluating Out-of-Domain Tool-Grounded Reasoning in LLM Agents

Researchers introduced AgentEscapeBench, a benchmark that evaluates how well LLM-based agents can reason through complex, multi-step tasks requiring external tool use and long-range dependency tracking. Testing 16 LLM agents against 270 escape-room-style problems revealed significant performance degradation as task complexity increased, with the best models dropping from 90% success to 60% as dependency depth tripled, highlighting a critical limitation in current AI agent capabilities.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 5/10

Exact Regular-Constrained Variable-Order Markov Generation via Sparse Context-State Belief Propagation

Researchers present a novel computational method for generating sequences constrained by regular automata using variable-order Markov models. The advancement eliminates the need to expand full K-tuple state spaces while maintaining exact inference, achieving linear complexity for fixed models and enabling efficient constrained sequence generation across applications.
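
The core constraint mechanism can be sketched as sampling from a Markov model while tracking a DFA state and filtering out symbols with no legal transition. This local-filtering toy (names and the no-'bb' automaton are invented for illustration) skips the paper's actual contribution, exact belief-propagation inference over sparse context-state pairs.

```python
import random

def constrained_sample(markov, dfa, start_state, accepting, length, seed=0):
    """Sample from a first-order Markov model, staying inside a DFA's language.

    At each step, only symbols with a live DFA transition are candidates;
    the automaton state advances alongside the Markov context.
    """
    rng = random.Random(seed)
    seq, state, prev = [], start_state, None
    for _ in range(length):
        options = [s for s, p in markov.get(prev, markov[None]).items()
                   if p > 0 and (state, s) in dfa]
        sym = rng.choice(options)
        seq.append(sym)
        state = dfa[(state, sym)]
        prev = sym
    return seq, state in accepting

# Toy model over {'a','b'}; the DFA forbids two consecutive 'b's.
markov = {None: {'a': 0.5, 'b': 0.5},
          'a': {'a': 0.5, 'b': 0.5},
          'b': {'a': 0.5, 'b': 0.5}}
dfa = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0}  # no (1, 'b'): 'bb' is illegal
seq, accepted = constrained_sample(markov, dfa, 0, {0, 1}, 12)
print(''.join(seq), accepted)
```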

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

From Pixels to Prompts: Vision-Language Models

A new educational resource aims to demystify Vision-Language Models (VLMs) by providing a structured framework for understanding how these systems combine image recognition and language processing. Rather than cataloging every model variant, the work focuses on building intuitive mental models that enable developers and researchers to understand VLMs conceptually and apply them effectively.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

MEMOREPAIR: Barrier-First Cascade Repair in Agentic Memory

Researchers introduce MemoRepair, a system that addresses cascade failures in agentic memory by preventing stale or invalidated information from corrupting downstream AI agent decisions. Using a barrier-first approach and graph-based optimization, the system reduces invalid memory exposure from 69-94% to 0% while maintaining 91-94% of valid successor states with significantly lower repair costs.
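
The cascade problem itself is easy to picture: if memory entries form a derivation graph, everything downstream of a stale entry is suspect. This sketch just marks descendants invalid; MemoRepair's contribution is repairing them so valid successors are preserved instead of discarded wholesale. The graph and names are illustrative.

```python
from collections import deque

def invalidate_cascade(deps, stale_root):
    """Mark every memory entry derived (transitively) from a stale entry.

    `deps` maps each entry to the entries derived from it; anything
    reachable from the stale root is flagged.
    """
    invalid, q = {stale_root}, deque([stale_root])
    while q:
        node = q.popleft()
        for child in deps.get(node, []):
            if child not in invalid:
                invalid.add(child)
                q.append(child)
    return invalid

# fact A feeds plan B and note C; plan B feeds action D.
deps = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(sorted(invalidate_cascade(deps, 'A')))  # ['A', 'B', 'C', 'D']
print(sorted(invalidate_cascade(deps, 'B')))  # ['B', 'D']
```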

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

When Does a Language Model Commit? A Finite-Answer Theory of Pre-Verbalization Commitment

Researchers developed a method to measure when language models stabilize their answer preferences during generation, before explicitly verbalizing a final answer. Using finite-answer projection analysis on the Qwen3-4B-Instruct model, they found answer preferences stabilize 17-31 tokens before the model states its answer, revealing the internal commitment dynamics of LLM reasoning.
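
The measurement reduces to a simple question once per-step preferences are extracted: from which step onward does the preferred answer stop changing? A toy version, taking the argmax answer at each step as input (a stand-in for the paper's finite-answer projection of hidden states):

```python
def commitment_index(preferences):
    """Return the earliest step from which the preferred answer equals the
    final answer at every subsequent step."""
    final = preferences[-1]
    for i in range(len(preferences) - 1, -1, -1):
        if preferences[i] != final:
            return i + 1
    return 0

# The model settles on 'B' at step 3, well before verbalizing the answer.
prefs = ['A', 'C', 'A', 'B', 'B', 'B', 'B']
print(commitment_index(prefs))  # 3
```

The gap between the commitment index and the step where the answer is actually stated is the 17-31 token lead the summary reports.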

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Hidden Coalitions in Multi-Agent AI: A Spectral Diagnostic from Internal Representations

Researchers introduce a spectral diagnostic method to detect hidden coalitions in multi-agent AI systems by analyzing mutual information patterns in internal neural representations rather than observable behavior. The technique successfully identifies hierarchical and dynamic coalition structures in reinforcement learning and language models, providing a scalable tool for monitoring emergent organization in distributed AI systems.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

State Representation and Termination for Recursive Reasoning Systems

Researchers present a formal framework for recursive reasoning systems that addresses two critical design challenges: how to represent evolving reasoning states and when to terminate iteration. The paper introduces an epistemic state graph representation and proposes the 'order-gap' metric as a stopping criterion, with theoretical guarantees for when this criterion provides meaningful guidance.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 5/10

Fast and Effective Redistricting Optimization via Composite-Move Tabu Search

Researchers present CM-Tabu, a composite-move Tabu search algorithm that solves spatial redistricting optimization problems more effectively by expanding the feasible solution space while maintaining district contiguity constraints. The method uses graph analysis to identify minimal unit movements or swaps that preserve connectivity, achieving superior solution quality and computational efficiency compared to traditional approaches.
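
For readers new to the underlying metaheuristic, here is a generic single-flip tabu search on a toy linear objective. It is far simpler than CM-Tabu's composite moves and has nothing district-specific, but it shows the core mechanics: accept the best non-tabu neighbor even when it worsens the score, and forbid reversing recent moves for a short tenure.

```python
import random

def tabu_search(weights, iters=50, tenure=2, seed=0):
    """Tabu search over bit vectors maximizing a linear score sum(w_i * x_i).

    Aspiration rule: a tabu move is allowed anyway if it beats the best
    solution found so far.
    """
    rng = random.Random(seed)
    n = len(weights)
    x = [rng.randint(0, 1) for _ in range(n)]
    def score(v):
        return sum(w * b for w, b in zip(weights, v))
    best, best_val = x[:], score(x)
    tabu = {}  # bit index -> last iteration at which flipping it is forbidden
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1  # single-flip neighbor
            val = score(y)
            if tabu.get(i, -1) < it or val > best_val:
                candidates.append((val, i, y))
        val, i, x = max(candidates)
        tabu[i] = it + tenure
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

# Optimum sets exactly the positive-weight bits: value 3 + 2 = 5.
best, val = tabu_search([3, -1, 2, -4])
print(best, val)
```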

🧠 AI · Bullish · arXiv – CS AI · 5h ago · importance 6/10

GraphDC: A Divide-and-Conquer Multi-Agent System for Scalable Graph Algorithm Reasoning

Researchers introduce GraphDC, a divide-and-conquer multi-agent framework that enables Large Language Models to solve complex graph algorithms more effectively by decomposing large graphs into smaller subgraphs for specialized agent reasoning. The approach significantly improves LLM performance on graph algorithmic tasks, particularly on larger instances where traditional end-to-end reasoning fails.

🧠 AI · Bullish · arXiv – CS AI · 5h ago · importance 6/10

Reason to Play: Behavioral and Brain Alignment Between Frontier LRMs and Human Game Learners

Researchers compared frontier Large Reasoning Models (LRMs) with traditional AI systems using human gameplay data paired with fMRI brain recordings. LRMs demonstrated superior alignment with human learning behavior and predicted brain activity an order of magnitude better than reinforcement learning alternatives, suggesting they more closely mirror human cognition during complex decision-making.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Abductive Reasoning with Probabilistic Commonsense

Researchers propose PACS, a probabilistic framework for abductive reasoning that models how commonsense beliefs vary across individuals rather than assuming universal agreement. By combining LLMs with formal solvers to sample diverse proofs and aggregate conclusions, PACS outperforms existing reasoning approaches on multiple benchmarks, addressing a fundamental limitation in neurosymbolic AI systems.
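
The aggregation step the summary describes can be sketched directly: sample many proofs, then treat the empirical frequency of each conclusion as its probability. In PACS the proofs come from LLMs plus a formal solver; here they are just given as conclusion strings, and the wet-grass example is an assumption for illustration.

```python
from collections import Counter

def aggregate_conclusions(sampled_proofs):
    """Turn conclusions from independently sampled proofs into probabilities."""
    counts = Counter(sampled_proofs)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

# Eight sampled abductive proofs about why the grass is wet.
proofs = ['rain', 'rain', 'sprinkler', 'rain',
          'rain', 'sprinkler', 'rain', 'rain']
probs = aggregate_conclusions(proofs)
print(probs)  # rain is the majority abductive explanation
```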

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

TraceFix: Repairing Agent Coordination Protocols with TLA+ Counterexamples

TraceFix is a verification-first framework that uses TLA+ model checking to automatically repair and validate multi-agent LLM coordination protocols, achieving 100% verification success on 48 test tasks with 62.5% passing on first attempt. The approach reduces deadlock/livelock failures from 31.1% to 14.1% and improves task completion rates to 89.4% compared to unverified baselines.
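
As a rough intuition for the model-checking step, here is a toy reachability search that flags deadlock states (reachable states with no outgoing transition), the simplest of the properties a TLA+ checker verifies. The two-agent lock example and all names are invented; the real tool also emits the counterexample trace that TraceFix feeds back into protocol repair.

```python
from collections import deque

def find_deadlocks(init, transitions):
    """Enumerate reachable states of a finite transition system and return
    those with no outgoing transition (deadlocks)."""
    seen, q, deadlocks = {init}, deque([init]), []
    while q:
        s = q.popleft()
        nexts = transitions.get(s, [])
        if not nexts:
            deadlocks.append(s)
        for t in nexts:
            if t not in seen:
                seen.add(t)
                q.append(t)
    return deadlocks

# Two agents each holding one lock and waiting for the other's.
transitions = {
    'idle': ['a_holds_1', 'b_holds_2'],
    'a_holds_1': ['both_wait'],
    'b_holds_2': ['both_wait'],
    'both_wait': [],  # neither agent can proceed
}
print(find_deadlocks('idle', transitions))  # ['both_wait']
```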

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

A Rod Flow Model for Adam at the Edge of Stability

Researchers extend rod flow modeling to Adam and other adaptive gradient methods, enabling more accurate continuous-time analysis of optimizer behavior at the edge of stability. This advancement bridges a gap in theoretical understanding of momentum-based optimization algorithms critical to modern deep learning.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Distributional Process Reward Models: Calibrated Prediction of Future Rewards via Conditional Optimal Transport

Researchers propose using conditional optimal transport to improve calibration of Process Reward Models (PRMs) used in AI inference-time scaling, addressing the problem of overestimated success probabilities. The method enables better confidence bounds for mathematical reasoning tasks and improves downstream performance in Best-of-N selection frameworks.
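
The downstream Best-of-N step is one line once a reward model exists: score each sampled answer and keep the top one. The paper's contribution is making those scores calibrated (via conditional optimal transport) so the comparison is trustworthy; the callable below is a placeholder, not a real PRM.

```python
def best_of_n(candidates, reward_model):
    """Best-of-N selection: keep the candidate the reward model scores highest."""
    return max(candidates, key=reward_model)

answers = ['... so x = 7', '... therefore the answer is 12', 'unsure']
# Placeholder reward: scores by length; a real PRM scores reasoning steps.
fake_reward = lambda a: len(a)
print(best_of_n(answers, fake_reward))
```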

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 5/10

A Linear-Transformer Hybrid for SNP-Based Genotype-to-Phenotype Prediction in Grapevine

Researchers developed LiT-G2P, a hybrid machine learning model combining linear genetic effects with Transformer-based neural networks to predict plant traits from DNA sequences in grapevines. The approach achieved superior prediction accuracy for leaf and trichome density across multiple years, demonstrating practical applications for genomic selection in agricultural breeding.

🧠 AI · Neutral · arXiv – CS AI · 5h ago · importance 6/10

Why DDIM Hallucinates More than DDPM: A Theoretical Analysis of Reverse Dynamics

Researchers provide a theoretical analysis showing that DDIM, a deterministic diffusion sampler, generates more hallucinations than the stochastic DDPM sampler when drawing from multi-modal distributions. The study proves that DDPM's injected noise helps trajectories escape local modes, while DDIM's deterministic reverse dynamics can become trapped between modes, with implications for improving generative-AI sampling algorithms.

🧠 AI · Bullish · arXiv – CS AI · 5h ago · importance 6/10

VITA-QinYu: Expressive Spoken Language Model for Role-Playing and Singing

Researchers unveiled VITA-QinYu, an expressive spoken language model that extends beyond natural conversation to generate role-playing and singing through a hybrid speech-text architecture. The model achieves state-of-the-art performance on conversational benchmarks while demonstrating superior expressiveness in non-conversational tasks, with researchers open-sourcing the code and providing a streaming-capable demo.
