y0news

AI × Crypto News Feed

Real-time AI-curated news from 34,553+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Path-Coupled Bellman Flows for Distributional Reinforcement Learning

Researchers propose Path-Coupled Bellman Flows (PCBF), a novel distributional reinforcement learning method that addresses limitations in existing flow-based approaches by using source-consistent paths and shared noise coupling to improve training stability and return distribution fidelity. The approach demonstrates competitive performance on benchmark tasks while maintaining computational efficiency through variance-reduction techniques.

AI · Bullish · arXiv – CS AI · 7h ago · importance 6/10

Why Do DiT Editors Drift? Plug-and-Play Low Frequency Alignment in VAE Latent Space

Researchers have identified why diffusion transformers (DiTs) degrade in quality during multi-turn image editing and proposed VAE-LFA, a training-free alignment method that operates in VAE latent space to suppress accumulated semantic drift. The solution works with both white-box and black-box models by aligning low-frequency components across editing rounds while preserving high-frequency details.
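The mechanism lends itself to a toy illustration. The sketch below is not VAE-LFA itself, just the generic idea under simple assumptions: take the 2-D FFT of an edited latent map, swap its low-frequency band for the reference's, and invert. The `cutoff` radius and all data are made up.

```python
import numpy as np

def low_freq_align(edited, reference, cutoff=4):
    """Replace the low-frequency band of `edited` with that of `reference`.

    Both inputs are 2-D latent maps; `cutoff` is the per-axis frequency
    radius treated as "low". Hypothetical parameters, not the paper's.
    """
    fe = np.fft.fft2(edited)
    fr = np.fft.fft2(reference)
    h, w = edited.shape
    fy = np.fft.fftfreq(h) * h            # integer frequency indices
    fx = np.fft.fftfreq(w) * w
    mask = (np.abs(fy)[:, None] <= cutoff) & (np.abs(fx)[None, :] <= cutoff)
    aligned = np.where(mask, fr, fe)      # low band from reference
    return np.fft.ifft2(aligned).real    # high band left untouched

rng = np.random.default_rng(0)
ref = rng.normal(size=(32, 32))
drifted = ref + 0.5 + 0.1 * rng.normal(size=(32, 32))  # DC drift plus noise
out = low_freq_align(drifted, ref)
# the DC component (a low-frequency statistic) snaps back to the reference
print(abs(out.mean() - ref.mean()) < 1e-6)
```

Because the correction only rewrites the masked band, any high-frequency detail added during editing survives, which is the property the paper is after.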

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

LLM Translation of Compiler Intermediate Representation

Researchers introduce IRIS-14B, a 14-billion-parameter LLM fine-tuned to translate compiler intermediate representations between GCC's GIMPLE and LLVM IR, achieving up to 44 percentage points higher accuracy than existing state-of-the-art models. The approach demonstrates how LLMs can function as interoperability layers in hybrid compiler architectures, enabling cross-toolchain workflows without modifying existing compiler infrastructure.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

When Does Value-Aware KV Eviction Help? A Fixed-Contract Diagnostic for Non-Monotone Cache Compression

Researchers present a diagnostic framework for evaluating KV cache eviction selectors in large language models, identifying three failure modes and demonstrating that value-aware ranking combined with evidence recovery achieves 72.6% accuracy on positive-margin test cases. The work addresses a critical bottleneck in long-context LLM inference by revealing why compression strategies succeed or fail.
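As a rough illustration of what "value-aware ranking" could mean, the sketch below scores each cached position by accumulated attention weight times value-vector norm and evicts the lowest-scoring entries. The scoring rule, data, and `evict` helper are illustrative assumptions, not the paper's selector.

```python
import math

def value_aware_scores(attn_weights, values):
    """Score each cached (key, value) pair by attention mass times value norm."""
    return [a * math.sqrt(sum(x * x for x in v))
            for a, v in zip(attn_weights, values)]

def evict(attn_weights, values, keep):
    """Keep the `keep` highest-scoring cache positions (indices, sorted)."""
    scores = value_aware_scores(attn_weights, values)
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return sorted(ranked[:keep])

attn = [0.50, 0.05, 0.30, 0.15]
vals = [[1, 0], [0, 4], [0.1, 0.1], [2, 2]]  # position 2: high attention, tiny value
print(evict(attn, vals, keep=2))
```

An attention-only selector would keep positions 0 and 2 here; weighting by value norm instead keeps 0 and 3, since position 2's values contribute almost nothing to the output even when attended to.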

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Beyond Penalization: Diffusion-based Out-of-Distribution Detection and Selective Regularization in Offline Reinforcement Learning

DOSER introduces a diffusion-model-based framework for offline reinforcement learning that improves out-of-distribution (OOD) action detection beyond traditional penalization methods. The approach uses single-step denoising reconstruction error to identify risky actions while selectively encouraging beneficial exploration, with theoretical convergence guarantees and empirical gains over baselines on suboptimal datasets.
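The core scoring signal is easy to sketch. Below, a hand-made `toy_denoiser` (projecting onto an assumed one-dimensional data manifold) stands in for the learned diffusion model; the recipe of noise, single-step denoise, and measure reconstruction error is the general idea, not DOSER's implementation.

```python
import random

def toy_denoiser(a):
    """Stand-in for a learned single-step denoiser: project the action onto
    the (assumed) data manifold, here the line a1 == a2."""
    m = (a[0] + a[1]) / 2
    return (m, m)

def ood_score(action, sigma=0.1, n=64, seed=0):
    """Single-step denoising reconstruction error, averaged over noise draws.
    In-distribution actions reconstruct well (low score); OOD actions don't."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        noisy = tuple(x + rng.gauss(0, sigma) for x in action)
        rec = toy_denoiser(noisy)
        total += sum((r - x) ** 2 for r, x in zip(rec, action))
    return total / n

in_dist = ood_score((0.5, 0.5))  # on the assumed manifold
ood = ood_score((0.9, -0.9))     # far from it
print(in_dist < ood)
```

Actions the denoiser can restore are treated as safe; actions it cannot restore get a high score and would be regularized, rather than penalizing everything off-policy uniformly.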

AI · Bullish · arXiv – CS AI · 7h ago · importance 6/10

Intelligent Autonomous Orchestration for Distributed Cloud Resources using Complex-Stability Analysis

Researchers propose C-SAS, an AI-driven orchestration framework using complex stability analysis to optimize distributed cloud resource allocation. The system reduces VM flapping by 94% and achieves 96% resource efficiency, outperforming traditional PID and machine learning approaches by embedding formal stability constraints into autonomous cloud infrastructure.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

The Safety-Aware Denoiser for Text Diffusion Models

Researchers propose Safety-Aware Denoiser (SAD), an inference-time safety framework that guides text diffusion models toward secure outputs during the denoising process without requiring model retraining. The method reduces unsafe text generation while maintaining output quality, offering a scalable alternative to post-hoc filtering approaches.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

The First Drop of Ink: Nonlinear Impact of Misleading Information in Long-Context Reasoning

Researchers reveal that large language models suffer from a nonlinear performance degradation when exposed to misleading information in long-context scenarios, with the majority of decline occurring when hard distractors comprise just a small fraction of the total context. This finding, termed 'The First Drop of Ink' effect, demonstrates that attention mechanisms disproportionately focus on misleading content, suggesting that upstream retrieval quality is more critical than previously understood for RAG and agentic systems.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Yield Curve Forecasting using Machine Learning and Econometrics: A Comparative Analysis

A comprehensive study comparing machine learning, deep learning, and traditional econometric methods for forecasting U.S. Treasury yield curves reveals that classical ARIMA models and naive benchmarks generally outperform advanced algorithms, though TimeGPT and RNNs show promise among machine learning approaches. The research challenges assumptions about deep learning's universal superiority in financial forecasting.
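The naive benchmark that the advanced models struggle to beat is trivial to state: forecast every future yield as the last observed one. A minimal sketch on synthetic numbers (not real Treasury data):

```python
import math

def naive_forecast(series, horizon):
    """Random-walk benchmark: every future step equals the last observation."""
    return [series[-1]] * horizon

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

# Synthetic 10-year yield path (illustrative numbers only)
yields = [4.10, 4.12, 4.08, 4.15, 4.20, 4.18, 4.22, 4.25]
train, hold = yields[:6], yields[6:]
print(round(rmse(naive_forecast(train, len(hold)), hold), 4))
```

Any ML model has to beat this RMSE out of sample to justify its complexity, which, per the study, ARIMA and the naive baseline often make surprisingly hard.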

AI · Bullish · arXiv – CS AI · 7h ago · importance 6/10

C2L-Net: A Data-Driven Model for State-of-Charge Estimation of Lithium-Ion Batteries During Discharge

Researchers propose C2L-Net, a data-driven neural network architecture that improves state-of-charge (SOC) estimation for lithium-ion batteries using only 20-second historical windows. The model achieves up to 60x faster inference than existing methods while maintaining competitive accuracy, addressing computational inefficiency and positional bias problems in battery management systems.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

DiagnosticIQ: A Benchmark for LLM-Based Industrial Maintenance Action Recommendation from Symbolic Rules

Researchers introduce DiagnosticIQ, a benchmark dataset of 6,690 expert-validated questions testing whether large language models can recommend maintenance actions based on industrial sensor rules. Evaluation of 29 LLMs reveals that while frontier models perform well on standard tasks, they are significantly brittle: they lose 13-60% accuracy under minor perturbations and pattern-match rather than reason when conditions are inverted.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

UTS at PsyDefDetect: Multi-Agent Councils and Absence-Based Reasoning for Defense Mechanism Classification

Researchers from UTS achieved second place in a psychological defense mechanism classification competition using a multi-agent AI system that identifies defense patterns through absence-based reasoning rather than presence detection. The system combines Gemini 2.5 agents with fine-tuned Qwen models to achieve an F1 score of 0.406, addressing critical biases in minority class prediction through structured ensemble methods.

🧠 Gemini

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Iterative Critique-and-Routing Controller for Multi-Agent Systems with Heterogeneous LLMs

Researchers propose a critique-and-routing controller for multi-agent LLM systems that iteratively refines outputs through sequential decision-making rather than one-shot routing. The method uses reinforcement learning with agent-utilization constraints to achieve performance approaching the strongest agent while reducing computational calls by over 75%, advancing coordination efficiency in heterogeneous AI systems.
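The control loop can be sketched abstractly. Below, stub agents and a stub critic stand in for LLMs; the real controller learns its routing policy with reinforcement learning under agent-utilization constraints, which this toy omits entirely.

```python
def run_controller(task, agents, critic, threshold=0.9, budget=3):
    """Iteratively route a task: call an agent, critique the draft, and either
    accept it or pass the draft on to the next agent. Stub logic only."""
    draft, calls = None, 0
    for name, agent in agents:
        draft = agent(task, draft)
        calls += 1
        if critic(draft) >= threshold or calls >= budget:
            break
    return draft, calls

agents = [
    ("cheap", lambda t, d: t.upper()),        # fast, rough first draft
    ("strong", lambda t, d: (d or t) + "!"),  # refines the prior draft
]
critic = lambda d: 1.0 if d.endswith("!") else 0.5  # toy acceptance test
answer, calls = run_controller("summarize report", agents, critic)
print(answer, calls)
```

The point of the sequential design is visible even here: the loop stops as soon as the critic accepts, so easy tasks never pay for the expensive agent, which is where the reported reduction in calls comes from.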

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Generalization Bounds of Emergent Communications for Agentic AI Networking

Researchers propose a novel emergent communication framework for 6G agentic AI networks that enables autonomous agents to learn their own communication protocols while accounting for physical networking constraints. The framework applies information-theoretic principles to quantify trade-offs between task-relevant information and computational complexity, with experimental validation showing improved generalization performance.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

LLM4Branch: Large Language Model for Discovering Efficient Branching Policies of Integer Programs

LLM4Branch introduces a novel framework using large language models to automatically discover efficient branching policies for Mixed Integer Linear Programming (MILP) solvers. The approach generates executable programs via LLMs and optimizes parameters through performance feedback, achieving competitive results with state-of-the-art GPU-based methods on standard benchmarks.

AI · Neutral · arXiv – CS AI · 7h ago · importance 5/10

Cplus2ASP: Computing Action Language C+ in Answer Set Programming

Cplus2ASP Version 2 is a new system that translates action language C+ into answer set programming, offering significant performance improvements over the Causal Calculator through modern ASP solving techniques. The tool supports incremental execution, external atoms via Lua integration, and extensible translations for other action languages, making it relevant for automated reasoning and planning applications.

AI · Neutral · arXiv – CS AI · 7h ago · importance 5/10

Reconciling Consistency-Based Diagnosis with Actual-Causality-Based Explanations

Researchers establish connections between Consistency-Based Diagnosis (CBD) and Actual Causality frameworks within Explainable AI (XAI), addressing a gap in how diagnosis systems explain their outputs. This theoretical work bridges two previously disconnected areas in AI research, with potential applications for making data management systems more interpretable and trustworthy.

AI · Neutral · arXiv – CS AI · 7h ago · importance 5/10

What Will Happen Next: Large Models-Driven Deduction for Emergency Instances

Researchers propose WLDS, a Large Language Model-driven system for simulating and deducing emergency scenarios across multiple domains. The system addresses limitations of traditional simulation methods by using LLMs to generate diverse, realistic emergency instance variations, with calibration mechanisms to ensure factual accuracy and logical consistency.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Biological Plausibility and Representational Alignment of Feedback Alignment in Convolutional Networks

Researchers demonstrate that modified feedback alignment (FA) algorithms can train convolutional neural networks while maintaining biological plausibility, with internal representations converging to structures similar to backpropagation despite using fundamentally different weight update mechanisms. This finding suggests that successful learning algorithms may achieve comparable results through different computational paths, bridging biologically plausible alternatives with practical neural network training.
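Feedback alignment itself is simple to demonstrate: the backward pass routes the output error through a fixed random matrix `B` instead of the transpose of the forward weights. A minimal numpy sketch on a toy regression problem (not the paper's convolutional setup):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
T = X @ rng.normal(size=(3, 1))      # linear teacher targets
W1 = rng.normal(size=(3, 8)) * 0.1   # forward weights (trained)
W2 = rng.normal(size=(8, 1)) * 0.1
B = rng.normal(size=(1, 8))          # fixed random feedback weights, never trained

def mse():
    return float(np.mean((np.tanh(X @ W1) @ W2 - T) ** 2))

loss0 = mse()
lr = 0.02
for _ in range(300):
    H = np.tanh(X @ W1)
    E = H @ W2 - T                   # output error
    dH = (E @ B) * (1 - H ** 2)      # error routed through B, not W2.T
    W2 -= lr * H.T @ E / len(X)      # output layer: exact gradient
    W1 -= lr * X.T @ dH / len(X)     # hidden layer: feedback alignment
loss = mse()
print(loss0, loss)
```

Despite `B` carrying no information about `W2`, the loss still falls, because the forward weights gradually align with the fixed feedback direction; that alignment effect is what the paper probes in convolutional networks.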

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Why Retrying Fails: Context Contamination in LLM Agent Pipelines

Researchers introduce the Context-Contaminated Restart Model (CCRM) to formally analyze why LLM agents fail at higher rates when retrying tasks after errors, showing that failed attempts pollute the context window and increase subsequent error rates 7.1x. The model provides closed-form formulas for success probability, optimal pipeline depth allocation, and quantifies the exact benefit of clearing context before retry attempts.
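The qualitative effect is easy to model. The sketch below is an illustrative toy, not CCRM's closed form: each failed attempt multiplies the next attempt's failure probability by a contamination factor (1.0 models clearing the context between tries), and we compare success probability within a fixed retry budget.

```python
def p_success(p_fail, retries, contamination=1.0):
    """Probability of succeeding within `retries` attempts when each failure
    multiplies the next attempt's failure probability by `contamination`.
    Illustrative toy model, not the paper's formula."""
    p_ok, survive, f = 0.0, 1.0, p_fail
    for _ in range(retries):
        f = min(f, 1.0)
        p_ok += survive * (1 - f)   # succeed on this attempt
        survive *= f                # all attempts so far failed
        f *= contamination          # contaminated context raises the next failure rate
    return p_ok

clean = p_success(0.3, retries=3, contamination=1.0)
dirty = p_success(0.3, retries=3, contamination=7.1)  # 7.1x error amplification
print(round(clean, 3), round(dirty, 3))
```

With a clean restart, retries compound toward success; with a 7.1x amplification, a single failure makes every later attempt hopeless, so the retry budget buys almost nothing. That is the intuition behind clearing context before retrying.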

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

From Historical Tabular Image to Knowledge Graphs: A Provenance-Aware Modular Pipeline

Researchers present a modular, provenance-aware pipeline that converts handwritten archival tables into Knowledge Graphs while maintaining transparency through intermediate inspection points. The approach combines table structure recognition, handwriting recognition, and semantic interpretation while tracking data lineage to ensure all extracted information remains traceable to its source, addressing the opacity problem in end-to-end AI systems.

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Separate First, Fuse Later: Mitigating Cross-Modal Interference in Audio-Visual LLMs Reasoning with Modality-Specific Chain-of-Thought

Researchers propose SFFL, a framework that mitigates cross-modal interference in audio-visual language models by enforcing separate reasoning chains for each modality before fusion. The approach uses modality-preference labels and reinforcement learning to reduce hallucinations and achieves 5-11% performance improvements on benchmarks.

AI · Bullish · arXiv – CS AI · 7h ago · importance 6/10

The Echo Amplifies the Knowledge: Somatic Marker Analogues in Language Models via Emotion Vector Re-Injection

Researchers demonstrate that language models can be enhanced with emotion-like markers that improve decision-making when combined with semantic knowledge, mirroring human neuroscience findings about emotional processing. By injecting emotion vectors into Gemma 3 during recall, the model achieved 80% good decision outcomes versus 52% with knowledge alone, validating that emotional context amplifies rather than replaces reasoning.
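The re-injection step resembles standard activation steering. The sketch below uses toy activations and the generic steering-vector recipe (mean activation difference between emotionally charged and neutral inputs, added back at recall time); the vectors, dimensions, and `alpha` are made up, not the paper's setup.

```python
def steer(hidden, emotion_vec, alpha=1.0):
    """Re-inject an emotion direction into a hidden state at recall time."""
    return [h + alpha * e for h, e in zip(hidden, emotion_vec)]

# Toy activations: the emotion direction is the mean difference between
# activations on "charged" vs "neutral" inputs.
charged = [[1.0, 0.2], [0.8, 0.4]]
neutral = [[0.1, 0.2], [0.3, 0.0]]
emotion_vec = [sum(c[i] - n[i] for c, n in zip(charged, neutral)) / len(charged)
               for i in range(2)]
print(steer([0.5, 0.5], emotion_vec))
```

The paper's claim is that this kind of injected signal improves decisions only in combination with the recalled knowledge, i.e. it biases rather than replaces the model's reasoning.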

AI · Neutral · arXiv – CS AI · 7h ago · importance 6/10

Results and Retrospective Analysis of the CODS 2025 AssetOpsBench Challenge

The CODS 2025 AssetOpsBench competition retrospective reveals critical gaps between public and private evaluation metrics in multi-agent orchestration systems. Hidden test sets dramatically altered performance rankings, particularly in execution tasks where correlations turned negative, while successful teams prioritized guardrails over novel architectures.

AI · Bullish · arXiv – CS AI · 7h ago · importance 6/10

Human-LLM Dialogue Improves Diagnostic Accuracy in Emergency Care

A study demonstrates that interactive dialogue between physicians and large language models significantly improves diagnostic accuracy in emergency medicine, with residents showing a 12.5% improvement on hard cases and standardized metrics confirming medium effect sizes across 52 clinical scenarios.
