y0news

AI × Crypto News Feed

Real-time AI-curated news from 33,549+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Predictive but Not Plannable: RC-aux for Latent World Models

Researchers present RC-aux, a lightweight auxiliary objective that improves latent world models for planning by addressing the spatiotemporal mismatch between short-horizon prediction training and long-horizon planning deployment. The method adds multi-horizon prediction and budget-conditioned reachability supervision to align learned representations with planning requirements, demonstrating improvements on goal-conditioned control tasks.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

MELD: Multi-Task Equilibrated Learning Detector for AI-Generated Text

Researchers introduce MELD, an advanced AI-generated text detector that uses multi-task learning to improve robustness against adversarial attacks, transfer across unseen models and domains, and maintain low false-positive rates. The detector outperforms most open-source competitors and matches leading commercial systems on public benchmarks.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Rethinking Importance Sampling in LLM Policy Optimization: A Cumulative Token Perspective

Researchers propose CTPO (Cumulative Token Policy Optimization), a new approach to reinforcement learning for large language models that addresses the bias-variance tradeoff in importance sampling ratios. By using cumulative token-level ratios with position-adaptive clipping, CTPO achieves superior performance on mathematical reasoning benchmarks compared to existing methods like PPO and GRPO.
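The cumulative token-level ratio idea can be illustrated with a short sketch. Everything here (the function name, the exponential position schedule, the clip bounds) is a hypothetical reconstruction from the summary above, not CTPO's actual formulation:

```python
import numpy as np

def cumulative_clipped_ratios(logp_new, logp_old, base_eps=0.2, decay=0.9):
    """Cumulative token-level importance ratios with a position-adaptive
    clipping band (illustrative sketch, not the paper's exact method)."""
    # Per-token ratio pi_new(a_t|s_t) / pi_old(a_t|s_t), kept in log space.
    log_ratio = np.asarray(logp_new) - np.asarray(logp_old)
    # Cumulative product of ratios up to each position t.
    cum_ratio = np.exp(np.cumsum(log_ratio))
    # Later positions accumulate more ratio terms and hence more variance,
    # so this hypothetical schedule tightens the clip band with position.
    clipped = np.empty_like(cum_ratio)
    for t, r in enumerate(cum_ratio):
        eps_t = base_eps * (decay ** t)
        clipped[t] = np.clip(r, 1.0 - eps_t, 1.0 + eps_t)
    return cum_ratio, clipped
```

Compared with PPO's per-token ratio, the cumulative product couples each token's update to the whole prefix, which is where the bias-variance trade-off the summary mentions comes from.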

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Knowledge Transfer Scaling Laws for 3D Medical Imaging

Researchers demonstrate that different 3D medical imaging domains (CT, MRI, PET) transfer knowledge asymmetrically during pretraining, following predictable power-law patterns. By optimizing data allocation based on these transfer dynamics, they achieve up to 58% performance gains over proportional sampling, revealing a hub-and-island structure where certain domains act as foundational knowledge sources for others.
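Estimating a power-law exponent like the ones described above is typically done by linear regression in log-log space. A minimal sketch (the synthetic data and exponent are invented for illustration, not taken from the paper):

```python
import numpy as np

def fit_power_law(n_samples, scores):
    """Fit scores ~ a * n^b via a degree-1 least-squares fit in log-log
    space, the standard way to estimate a scaling-law exponent."""
    log_n, log_s = np.log(n_samples), np.log(scores)
    b, log_a = np.polyfit(log_n, log_s, 1)  # slope = exponent b
    return np.exp(log_a), b

# Synthetic check: scores that follow 2 * n^0.3 exactly.
n = np.array([1e3, 1e4, 1e5, 1e6])
a, b = fit_power_law(n, 2.0 * n ** 0.3)
```

With per-domain fits like this, one can compare exponents across CT, MRI, and PET pretraining mixes and allocate data toward the domains with the steepest transfer curves.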

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

TEA-Bench: A Systematic Benchmarking of Tool-enhanced Emotional Support Dialogue Agent

Researchers introduce TEA-Bench, the first interactive benchmark for evaluating how external tools improve emotional support conversation (ESC) systems. Testing nine LLMs reveals that tool augmentation reduces hallucination and improves support quality, but effectiveness depends heavily on model capacity—stronger models leverage tools more effectively than weaker ones.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

RELO: Reinforcement Learning to Localize for Visual Object Tracking

Researchers introduce RELO, a reinforcement learning method for visual object tracking that replaces traditional handcrafted spatial priors with a learned localization policy optimized directly for tracking metrics like IoU and AUC. The approach achieves state-of-the-art results on LaSOText benchmarks, demonstrating that reward-driven localization outperforms conventional prior-based methods.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Escaping the Diversity Trap in Robotic Manipulation via Anchor-Centric Adaptation

Researchers identify a critical flaw in robotic manipulation training: collecting diverse single-shot demonstrations paradoxically degrades performance due to estimation noise. Their proposed Anchor-Centric Adaptation (ACA) framework prioritizes repeated demonstrations at core tasks before expanding coverage, significantly improving robot reliability under strict data budgets.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Mage: Multi-Axis Evaluation of LLM-Generated Executable Game Scenes Beyond Compile-Pass Rate

Researchers introduce Mage, a multi-axis evaluation framework that reveals compile-pass rate is a misleading metric for assessing LLM-generated code in complex domains. Testing across four open-weight language models on game scene synthesis, they find direct code generation achieves 43% runtime success but produces structurally invalid outputs, while IR-conditioned approaches recover functional correctness at the cost of lower raw execution rates.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

TSRBench: A Comprehensive Multi-task Multi-modal Time Series Reasoning Benchmark for Generalist Models

TSRBench introduces a comprehensive benchmark with 4,125 problems across 14 domains to evaluate how well AI models perform at time series reasoning tasks. Testing 30+ leading models reveals that current LLMs and multimodal models struggle with numerical forecasting despite strong semantic understanding, and fail to effectively combine textual and visual data inputs.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

BalCapRL: A Balanced Framework for RL-Based MLLM Image Captioning

Researchers introduce BalCapRL, a reinforcement learning framework that improves multimodal image captioning by balancing three competing objectives: utility-aware correctness, reference coverage, and linguistic quality. The method achieves significant performance gains across multiple models by applying reward-decoupled normalization and length-conditional masking, addressing the trade-offs present in existing captioning approaches.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

A Hybrid Graph Neural Network for Enhanced EEG-Based Depression Detection

Researchers propose a Hybrid Graph Neural Network (HGNN) for improved EEG-based depression detection that combines fixed and adaptive graph connections to capture both common and individualized brain patterns. The model incorporates a hierarchical pooling mechanism to extract patient-specific brain network information, achieving state-of-the-art results on public datasets.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

AgentProg: Empowering Long-Horizon GUI Agents with Program-Guided Context Management

AgentProg introduces a novel program-guided context management system for long-horizon GUI agents that addresses the critical bottleneck of expanding interaction history overhead. By reframing interaction history as structured programs with variables and control flow, the approach preserves semantic information while reducing context requirements, achieving state-of-the-art performance on AndroidWorld benchmarks while maintaining robustness on extended tasks.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Tracking Large-scale Shared Bikes with Inertial Motion Learning in GNSS Blocked Environments

Researchers propose an inertial motion learning framework for tracking shared bikes in GNSS-denied environments like urban canyons, combining mechanical constraints with mixture-of-experts models to achieve 12% accuracy improvements over baselines. The system leverages pedaling behavior patterns to dynamically calibrate wheel speed estimates, demonstrating practical viability through real-world deployment data from DiDi's bike-sharing platform.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

How to Compress KV Cache in RL Post-Training? Shadow Mask Distillation for Memory-Efficient Alignment

Researchers propose Shadow Mask Distillation to address the memory bottleneck created by KV cache compression during reinforcement learning post-training of large language models. The technique tackles the critical off-policy bias that emerges when compressed contexts are used during rollout generation while full contexts are used for parameter updates, a problem that amplifies instability in RL optimization.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

TTF: Temporal Token Fusion for Efficient Video-Language Model

Researchers introduce Temporal Token Fusion (TTF), a training-free compression technique that reduces visual tokens in video-language models by 67% while maintaining 99.5% accuracy. The method addresses the critical bottleneck of LLM prefill costs in video understanding by identifying and fusing redundant tokens across video frames using local similarity matching.
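A training-free fusion pass of this kind can be sketched in a few lines. The rule below (drop a current-frame token when the token at the same spatial position in the previous frame is nearly identical by cosine similarity) is an illustrative stand-in; TTF's actual matching and fusing rule may differ:

```python
import numpy as np

def fuse_temporal_tokens(prev_tokens, cur_tokens, thresh=0.9):
    """Drop current-frame visual tokens that are redundant with the same
    spatial position in the previous frame (illustrative sketch only)."""
    keep = []
    for i, (p, c) in enumerate(zip(prev_tokens, cur_tokens)):
        cos = np.dot(p, c) / (np.linalg.norm(p) * np.linalg.norm(c) + 1e-8)
        if cos < thresh:          # token changed enough to be worth keeping
            keep.append(i)
    return cur_tokens[keep]
```

Because static backgrounds dominate most video frames, a local rule like this can remove a large share of tokens before the LLM prefill, which is exactly the bottleneck the summary describes.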

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Automated Evaluation can Distinguish the Good and Bad AI Responses to Patient Questions about Hospitalization

Researchers demonstrate that automated evaluation metrics can reliably assess AI-generated responses to patient hospitalization questions, matching human expert ratings across 2,800 responses from 28 AI systems. This approach addresses the scalability limitations of manual expert review while maintaining accuracy across three key dimensions: question answering, clinical evidence use, and medical knowledge application.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

A Statistical Framework for Algorithmic Collective Action with Multiple Collectives

Researchers propose the first statistical framework for Algorithmic Collective Action (ACA) involving multiple independent collectives attempting to coordinate changes in shared data to influence AI classifier behavior. The framework provides computable bounds on collective success while accounting for varying sizes, strategies, and goal alignment across groups, with applications to climate adaptation in smart cities.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Benchmarking World-Model Learning with Environment-Level Queries

Researchers introduce WorldTest, a new evaluation protocol for assessing whether AI agents learn general-purpose world models capable of answering diverse environment-level queries. AutumnBench, an instantiation of this framework, benchmarks 43 grid-world environments across 129 tasks and reveals that frontier AI models significantly underperform humans, with gaps attributed to differences in exploration and belief-updating strategies.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Geometric Kolmogorov-Arnold Network (GeoKAN)

Researchers introduce Geometric Kolmogorov-Arnold Networks (GeoKANs), an advancement in KAN-type neural networks that learn geometry-adapted coordinate systems rather than relying on fixed Euclidean inputs. By adapting a diagonal Riemannian metric during training, GeoKAN redistributes computational capacity toward regions of rapid variation, making it particularly effective for physics-informed learning and differential equation problems.

AI · Bearish · arXiv – CS AI · 1d ago · 6/10

Are LLM Agents Behaviorally Coherent? Latent Profiles for Social Simulation

Researchers found that Large Language Models lack behavioral coherence across different experimental settings, despite generating responses similar to humans. While LLMs can mimic human survey answers, they fail to maintain consistent behavioral profiles when tested conversationally, revealing a critical limitation for their use as substitutes in human-subject research.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

A Multi-Memory Segment System for Generating High-Quality Long-Term Memory Content in Agents

Researchers propose a Multi-Memory Segment System (MMS) that improves how AI agents generate and store long-term memories by moving beyond simple summarization. The system creates structured retrieval and contextual memory units inspired by cognitive psychology, enabling more effective historical data utilization and response quality in agent interactions.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Skip-It? Theoretical Conditions for Layer Skipping in Vision-Language Models

Researchers propose a theoretical framework for identifying when layer skipping in vision-language models reduces computational costs without sacrificing performance. The work establishes experimentally verifiable redundancy conditions that unify and improve upon existing pruning heuristics, confirming that early and late vision tokens contain significant redundancies across models.
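One common redundancy heuristic in this space flags a layer as skippable when its output barely rotates the representation. The sketch below uses a plain cosine-similarity test as an illustrative proxy; the paper's formal redundancy conditions are more precise than this:

```python
import numpy as np

def redundant_layers(hidden_states, tol=0.99):
    """Flag layers whose input/output cosine similarity exceeds `tol`
    as skip candidates (a simple heuristic proxy, not the paper's
    theoretical conditions)."""
    flags = []
    for h_in, h_out in zip(hidden_states[:-1], hidden_states[1:]):
        cos = np.dot(h_in, h_out) / (np.linalg.norm(h_in) * np.linalg.norm(h_out))
        flags.append(bool(cos > tol))
    return flags
```

Applied per token position, a test like this is how one would check the summary's claim that early and late vision tokens carry significant redundancy.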

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Gradient Extrapolation-Based Policy Optimization

Researchers propose GXPO, a new policy optimization technique for reinforcement learning that approximates multi-step lookahead using only three backward passes instead of many, improving large language model reasoning performance by 1.65-5.00 points over standard GRPO while achieving up to 4x step speedup.

🧠 Llama
AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Active teacher selection for reward learning

Researchers introduce the Hidden Utility Bandit (HUB) framework to address a critical limitation in reward learning systems: their reliance on feedback from a single idealized teacher. The framework models teacher heterogeneity in rationality, expertise, and cost, enabling Active Teacher Selection (ATS) algorithms that strategically choose which teachers to query, demonstrating superior performance in paper recommendation and vaccine testing applications.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Learning to Pose Problems: Reasoning-Driven and Solver-Adaptive Data Synthesis

Researchers developed a novel framework for synthesizing training data that enables reasoning models to generate high-quality mathematical and reasoning problems by explicitly planning problem directions and adapting difficulty to solver capabilities. The approach achieved a 3.4% cumulative improvement across 10 benchmarks, demonstrating scalable alternatives to manual dataset curation.
