
AI × Crypto News Feed

Real-time AI-curated news from 34,840+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

The Reciprocity Gradient

Researchers introduce the reciprocity gradient, a novel machine learning method that addresses the influence attribution problem in multi-agent strategic interactions. The approach backpropagates reward signals through estimated opponent policies without requiring reward shaping, enabling agents to learn context-sensitive cooperation strategies that outperform sample-based baselines.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Neuroscience-Inspired Analyses of Visual Interestingness in Multimodal Transformers

Researchers analyzed how Qwen3-VL-8B, a multimodal transformer, encodes visual interestingness—a measure derived from human engagement data—without explicit supervision. Using neuroscience-inspired methods, they found that the model's internal representations align with human-derived interestingness scores, suggesting transformers may capture principles of human attention and perception.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Interpretable Machine Learning for Football Performance Analysis: Evidence of Limited Transferability from Elite Leagues to University Competition

Researchers found that machine learning models trained on elite European football leagues lose interpretability and reliability when applied to university-level competition, suggesting that performance insights don't transfer across competition tiers. The study reveals that explanation stability and feature importance hierarchies are domain-dependent, challenging the assumption that ML-derived performance determinants are universally applicable.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Hierarchical Mixture-of-Experts with Two-Stage Optimization

Researchers introduce Hi-MoE, a hierarchical Mixture-of-Experts framework that addresses a fundamental routing trade-off in sparse MoE models by implementing two-stage optimization: inter-group load balancing and intra-group expert specialization. Tested on large-scale NLP and vision tasks, Hi-MoE achieves 5.6% perplexity improvements and superior expert balance compared to existing methods.

🏢 Meta · 🏢 Perplexity
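The two-stage routing idea can be sketched in a few lines. The sketch below is an illustrative reconstruction, not Hi-MoE's actual code: the function names are made up, and a generic Switch-style auxiliary loss on group usage stands in for the paper's inter-group load-balancing objective.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_route(tokens, group_w, expert_w, n_groups, experts_per_group):
    """Stage 1 routes each token to a group; stage 2 picks a specialist
    expert inside that group. Load balancing is enforced only at the
    group level, so experts within a group are free to specialize."""
    group_probs = softmax(tokens @ group_w)                    # (B, G)
    group_id = group_probs.argmax(axis=-1)                     # (B,)
    expert_logits = (tokens @ expert_w).reshape(
        len(tokens), n_groups, experts_per_group)              # (B, G, E)
    local = expert_logits[np.arange(len(tokens)), group_id]    # (B, E)
    expert_id = group_id * experts_per_group + local.argmax(axis=-1)
    # Auxiliary loss penalizing skewed group usage (stage-1 objective)
    usage = np.bincount(group_id, minlength=n_groups) / len(tokens)
    balance_loss = n_groups * float((usage * group_probs.mean(axis=0)).sum())
    return expert_id, balance_loss
```

Separating the two concerns is the point: a flat router has to trade balance against specialization in a single gating decision, while the hierarchy optimizes each at its own level.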
AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Toward Optimal Regret in Robust Pricing: Decoupling Corruption and Time

Researchers have resolved a longstanding open problem in robust dynamic pricing by developing a binary search variant that achieves decoupled regret bounds of O(C + log T) when corruption is known and O(C + log² T) when unknown, significantly improving upon the previous O(C log log T) bound from 2025.
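The flavor of a corruption-tolerant binary search is easy to illustrate. This is not the paper's algorithm, just a generic sketch: the search only moves after several net-consistent buy/no-buy signals at the current price, so each interval shrink costs a constant number of clean rounds plus at most the corrupted ones, which is how a "C plus log T" style bound decouples corruption from time.

```python
def robust_price_search(offer, lo=0.0, hi=1.0, rounds=90, confirmations=3):
    """Binary search over prices that moves only after `confirmations` net
    consistent responses at the current midpoint, so any bounded number of
    corrupted (adversarially flipped) answers delays the search without
    permanently derailing it. `offer(p)` returns True if the buyer accepts."""
    votes = 0
    for _ in range(rounds):
        mid = (lo + hi) / 2.0
        votes += 1 if offer(mid) else -1
        if votes >= confirmations:       # buyer keeps accepting: raise price
            lo, votes = mid, 0
        elif votes <= -confirmations:    # buyer keeps rejecting: lower price
            hi, votes = mid, 0
    return (lo + hi) / 2.0
```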

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

PruneTIR: Inference-Time Tool Call Pruning for Effective yet Efficient Tool-Integrated Reasoning

Researchers introduce PruneTIR, an inference-time optimization framework that improves tool-integrated reasoning in large language models by pruning failed trajectories, resampling tool calls, and suspending tool usage when errors persist. The approach enhances LLM performance without requiring additional training, demonstrating significant improvements in accuracy and efficiency.
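The control loop described above (prune failed trajectories, resample, then suspend the tool) can be sketched as follows; the function and its arguments are hypothetical illustrations, not PruneTIR's API.

```python
def call_with_pruning(tool, arg_candidates, max_resamples=2, fallback=None):
    """Try a tool call; on an error, prune that trajectory and resample
    alternative arguments; after exhausting resamples, suspend the tool
    entirely and fall back to direct (tool-free) reasoning."""
    for args in arg_candidates[: max_resamples + 1]:
        try:
            return tool(*args), "tool"
        except Exception:
            continue                      # failed trajectory is pruned
    result = fallback() if fallback is not None else None
    return result, "suspended"
```

Because all three interventions happen at inference time, the base model needs no retraining to benefit.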

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

What Cohort INRs Encode and Where to Freeze Them

Researchers demonstrate that early layers of cohort-trained Implicit Neural Representations (INRs) encode transferable features for signal fitting, identifying optimal freezing points through weight stable rank analysis. Using sparse autoencoders for mechanistic interpretability, they reveal that SIREN and Fourier-feature MLPs learn fundamentally different dictionary representations despite comparable performance, with implications for designing more generalizable neural architectures.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Graph Computation Meets Circuit Algebra: A Task-Aligned Analysis of Graph Neural Networks for Electronic Design Automation

This research paper presents a task-aligned framework for applying Graph Neural Networks (GNNs) to Electronic Design Automation (EDA) problems, arguing that successful implementations require architectural alignment with the underlying mathematics of each specific chip design task. The authors systematize how different EDA challenges—from timing analysis to routing and power delivery—demand distinct GNN computation patterns, identifying current mismatches and failure modes that will likely shape future development.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

UMEDA: Unified Multi-modal Efficient Data Fusion for Privacy-Preserving Graph Federated Learning via Spectral-Gated Attention and Diffusion-Based Operator Alignment

Researchers introduce UMEDA, a federated learning framework designed to enable device-free localization across heterogeneous sensors while maintaining privacy. The system uses spectral signal processing and diffusion-based aggregation to align data from different sensor modalities without requiring direct node correspondence, achieving superior performance on multi-modal benchmarks under privacy constraints.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

HapticLDM: A Diffusion Model for Text-to-Vibrotactile Generation

Researchers introduce HapticLDM, a diffusion model that generates haptic feedback from text descriptions, outperforming previous autoregressive approaches in realism and semantic accuracy. The approach enables more efficient vibration design for metaverse, gaming, and film applications by improving how AI converts natural language into precise vibrotactile feedback.

AI · Neutral · arXiv – CS AI · 1d ago · 5/10

Novel GPU Boruta algorithms for feature selection from high-dimensional data

Researchers have developed GPU-accelerated versions of the Boruta feature selection algorithm, significantly improving computational efficiency for processing large-scale datasets while maintaining accuracy comparable to the original CPU-based method. The two variants—Boruta-Permut and Boruta-TreeImp—demonstrate that GPU acceleration offers a cost-effective solution for machine learning workflows on high-dimensional data.
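Boruta's core test is simple to sketch: a feature is confirmed only if it repeatedly beats the best "shadow" (permuted copy of a real feature). The sketch below uses |correlation with the target| as a lightweight stand-in for the tree-based importances the real algorithm (and its GPU variants) compute; it illustrates the statistics, not the paper's kernels.

```python
import numpy as np

def boruta_step(X, y, n_trials=20, seed=0):
    """One Boruta-style screening pass: in each trial, permute every
    feature column to build shadows, then count a 'hit' for each real
    feature whose importance exceeds the best shadow's importance."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    importance = lambda M: np.abs(
        [np.corrcoef(M[:, j], y)[0, 1] for j in range(d)])
    real = importance(X)
    hits = np.zeros(d)
    for _ in range(n_trials):
        shadows = rng.permuted(X, axis=0)   # shuffle each column independently
        hits += real > importance(shadows).max()
    return hits / n_trials                  # per-feature hit rate
```

The permutation-and-compare loop is embarrassingly parallel across trials and features, which is what makes GPU acceleration attractive for high-dimensional data.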

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

What If We Let Forecasting Forget? A Sparse Bottleneck for Cross-Variable Dependencies

Researchers introduce MS-FLOW, a machine learning framework that improves multivariate time series forecasting by using sparse, selective connections between variables rather than dense interactions. The approach addresses the problem of spurious correlations that plague existing methods, achieving state-of-the-art accuracy on 12 benchmarks while identifying fewer but more reliable dependencies.
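The "letting forecasting forget" idea amounts to a hard bottleneck on cross-variable links. The sketch below is a generic top-k version of that idea, not MS-FLOW's actual mechanism: only the k strongest dependencies per target series survive, and everything else is zeroed out before renormalization.

```python
import numpy as np

def sparse_dependency_bottleneck(scores, k=2):
    """Keep only the k strongest cross-variable links per target series
    (the bottleneck); every other dependency is forgotten outright, and
    a softmax over the survivors renormalizes the weights."""
    d = scores.shape[0]
    idx = np.argsort(-np.abs(scores), axis=1)[:, :k]   # top-k per row
    mask = np.zeros_like(scores, dtype=bool)
    mask[np.arange(d)[:, None], idx] = True
    kept = np.where(mask, scores, -np.inf)             # drop the rest
    e = np.exp(kept - kept.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

A dense softmax would still assign small but nonzero weight to every spurious correlation; the hard mask removes them entirely, which is the claimed source of robustness.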

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Iterative Critique-and-Routing Controller for Multi-Agent Systems with Heterogeneous LLMs

Researchers propose a critique-and-routing controller for multi-agent LLM systems that iteratively refines outputs through sequential decision-making rather than one-shot routing. The method uses reinforcement learning with agent-utilization constraints to achieve performance approaching the strongest agent while reducing computational calls by over 75%, advancing coordination efficiency in heterogeneous AI systems.
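The sequential decision loop can be sketched as a critique-gated routing loop; this toy version omits the paper's reinforcement learning and agent-utilization constraints and simply shows how early stopping on the critique score saves agent calls.

```python
def critique_and_route(agents, critic, prompt, max_steps=4, threshold=0.9):
    """Sequentially route the draft through agents, scoring each revision
    with a critic; stop as soon as the critique score clears the threshold
    so later (possibly expensive) agents are never called."""
    draft, calls = prompt, 0
    for step in range(max_steps):
        draft = agents[step % len(agents)](draft)
        calls += 1
        if critic(draft) >= threshold:
            break
    return draft, calls
```

A learned controller replaces the round-robin `step % len(agents)` choice here with a routing policy, which is where the reported call savings come from.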

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Verifier-Free RL for LLMs via Intrinsic Gradient-Norm Reward

Researchers propose VIGOR, a verifier-free reinforcement learning method for large language models that eliminates dependency on gold labels or domain-specific verifiers by using gradient-norm measurements as intrinsic reward signals. The approach demonstrates measurable improvements over existing baselines on mathematical reasoning and exhibits cross-domain transfer to code tasks, addressing a major scalability constraint in current RL-based LLM training.
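The intuition behind a gradient-norm reward can be shown on a toy model. The sketch below is a logistic-regression stand-in, not VIGOR itself: a sampled label is scored by the parameter-gradient norm it would induce, with no external verifier involved. Whether large or small norms should be rewarded depends on the training objective; the sketch only exposes the signal.

```python
import numpy as np

def grad_norm_reward(w, x, y_sampled):
    """Score a sampled label by the gradient norm it would induce on the
    model: samples the model already fits confidently yield small gradients,
    surprising samples yield large ones. A toy stand-in for measuring an
    LLM's gradient norm as an intrinsic, verifier-free reward."""
    p = 1.0 / (1.0 + np.exp(-x @ w))       # model's probability of label 1
    grad = (p - y_sampled) * x             # gradient of the log-loss in w
    return float(np.linalg.norm(grad))
```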

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Diagnosing Spectral Ceilings in Equivariant Neural Force Fields

Researchers introduce a spectral-injection diagnostic method to measure which angular frequencies equivariant neural force fields can preserve, revealing sharp performance cliffs at theoretical capacity boundaries. Testing on aspirin with NequIP backbones shows a dramatic 11.7x performance drop at the predicted boundary, validated across multiple architectures and calibrated through polynomial span theorems.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

HTPO: Towards Exploration-Exploitation Balanced Policy Optimization via Hierarchical Token-level Objective Control

Researchers introduce HTPO, a novel reinforcement learning algorithm that optimizes Large Language Models by assigning different learning objectives to different tokens based on their functional roles in reasoning tasks. The method achieves significant performance improvements on challenging benchmarks like AIME, demonstrating that granular token-level control can better balance exploration and exploitation in AI training.
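Per-token objective mixing can be illustrated with a minimal loss function; this is a generic sketch rather than HTPO's actual objective, and the role tags and coefficients are invented for the example.

```python
import numpy as np

def token_level_objective(logprobs, advantages, roles, entropies,
                          explore_coef=0.01):
    """Mix objectives per token: every token gets a policy-gradient term,
    but tokens tagged as reasoning steps also earn an entropy bonus that
    encourages exploration, while answer tokens are trained purely to
    exploit. Returns the scalar loss to minimize."""
    pg = -(logprobs * advantages)                       # exploitation term
    bonus = np.where(np.asarray(roles) == "reason",
                     explore_coef * entropies, 0.0)     # exploration term
    return float((pg - bonus).mean())
```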

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Magis-Bench: Evaluating LLMs on Magistrate-Level Legal Tasks

Researchers introduced Magis-Bench, a new benchmark for evaluating large language models on magistrate-level judicial tasks based on Brazilian competitive exams. Testing 23 state-of-the-art LLMs revealed that even top performers like Google's Gemini-3-Pro-Preview score below 70% on complex legal reasoning and judicial writing tasks, indicating significant gaps in AI legal capabilities.

🧠 Claude · 🧠 Gemini
AI · Neutral · arXiv – CS AI · 1d ago · 6/10

A Qualitative Test-Risk Mechanism for Scaling Behavior in Normalized Residual Networks

Researchers present a theoretical framework explaining how depth expansion in normalized residual networks improves test performance as models scale. The work decomposes scaling behavior into representational gain, optimization gain, and generalization transfer, providing formal guarantees that adding residual blocks can reduce test risk under specific conditions.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Hierarchical Causal Abduction: A Foundation Framework for Explainable Model Predictive Control

Researchers present Hierarchical Causal Abduction (HCA), a framework that makes Model Predictive Control decisions interpretable by combining physics-informed reasoning, optimization evidence, and causal discovery. The method achieves 53% higher explanation accuracy than existing approaches across industrial control applications, addressing a critical barrier to deploying AI in safety-critical infrastructure.
