y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,565+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Function Induction and Task Generalization: An Interpretability Study with Off-by-One Addition

Researchers studied how large language models generalize to new tasks through "off-by-one addition" experiments, discovering a "function induction" mechanism that operates at higher abstraction levels than previously known induction heads. The study reveals that multiple attention heads work in parallel to enable task-level generalization, with this mechanism being reusable across various synthetic and algorithmic tasks.
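The probe behind the study is easy to reproduce: a few-shot prompt whose "sums" are all one too large, so a model must induce the task f(a, b) = a + b + 1 from the examples rather than fall back on ordinary addition. A minimal sketch (helper names are illustrative, not the paper's code):

```python
import random

def off_by_one_prompt(n_shots=4, seed=0):
    """Build a few-shot prompt where every shown 'sum' is one more than
    the true sum, followed by an unanswered query. A model that
    task-generalizes should continue the induced pattern a + b + 1."""
    rng = random.Random(seed)
    lines = []
    for _ in range(n_shots):
        a, b = rng.randint(0, 9), rng.randint(0, 9)
        lines.append(f"{a}+{b}={a + b + 1}")
    a, b = rng.randint(0, 9), rng.randint(0, 9)
    lines.append(f"{a}+{b}=")          # query line the model completes
    return "\n".join(lines), a + b + 1  # prompt and off-by-one answer

prompt, answer = off_by_one_prompt(4, seed=1)
```

The interpretability claim is about what happens inside the model on prompts like this, with several attention heads jointly implementing the shifted function.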

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

VITA: Vision-to-Action Flow Matching Policy

Researchers developed VITA, a new AI framework that streamlines robot policy learning by directly flowing from visual inputs to actions without requiring conditioning modules. The system achieves 1.5-2x faster inference speeds while maintaining or improving performance compared to existing methods across 14 simulation and real-world robotic tasks.
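Flow matching in general trains a velocity field to carry samples along straight paths from a source distribution to a target; VITA's twist is that the source is a visual embedding rather than noise. A minimal numpy sketch of how one training pair is constructed (names are illustrative, not the paper's code):

```python
import numpy as np

def flow_matching_pair(source, target, rng):
    """Sample one flow-matching training example: a point x_t on the
    straight-line path from source to target at a random time t, and
    the constant velocity (target - source) the policy would regress."""
    t = rng.uniform()
    x_t = (1 - t) * source + t * target
    v_target = target - source
    return t, x_t, v_target

rng = np.random.default_rng(0)
vision_embedding = rng.normal(size=8)  # stands in for the visual latent
action = rng.normal(size=8)            # stands in for the robot action
t, x_t, v = flow_matching_pair(vision_embedding, action, rng)
```

Because the path starts at the visual latent itself, no separate conditioning module is needed, which is where the reported inference speedup comes from.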

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

WebDS: An End-to-End Benchmark for Web-based Data Science

Researchers introduce WebDS, a new benchmark for evaluating AI agents on real-world web-based data science tasks across 870 scenarios and 29 websites. Current state-of-the-art LLM agents achieve only 15% success rates compared to 90% human accuracy, revealing significant gaps in AI capabilities for complex data workflows.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

ERDES: A Benchmark Video Dataset for Retinal Detachment and Macular Status Classification in Ocular Ultrasound

Researchers have released ERDES, the first open-access dataset of ocular ultrasound videos for detecting retinal detachment and macular status using machine learning. The dataset addresses a critical gap in automated medical diagnosis by enabling AI models to classify retinal detachment severity, which is essential for determining surgical urgency.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Effective Sample Size and Generalization Bounds for Temporal Networks

Researchers propose a new evaluation methodology for temporal deep learning that controls for effective sample size rather than raw sequence length. Their analysis of Temporal Convolutional Networks on time series data shows that stronger temporal dependence can actually improve generalization when properly evaluated, contradicting results from standard evaluation methods.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Adaptive Quantized Planetary Crater Detection System for Autonomous Space Exploration

Researchers propose an Adaptive Quantized Planetary Crater Detection System (AQ-PCDSys) that uses quantized neural networks and multi-sensor fusion to enable real-time AI-powered crater detection on resource-constrained space exploration hardware. The system addresses the critical bottleneck of deploying sophisticated deep learning models on power-limited, radiation-hardened space computers.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

A Geometric Perspective on the Difficulties of Learning GNN-based SAT Solvers

Researchers explain why Graph Neural Networks (GNNs) struggle with hard Boolean satisfiability (SAT) instances through a geometric analysis based on graph Ricci curvature. They prove that harder SAT instances have more negative curvature, creating connectivity bottlenecks that prevent GNNs from effectively propagating long-range dependencies.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Index-Preserving Lightweight Token Pruning for Efficient Document Understanding in Vision-Language Models

Researchers have developed a lightweight token pruning framework that reduces computational costs for vision-language models in document understanding tasks by filtering out non-informative background regions before processing. The approach uses a binary patch-level classifier and max-pooling refinement to maintain accuracy while substantially lowering compute demands.
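The pipeline described, score each patch, then dilate the kept region so strong patches carry their neighbors along, can be sketched in a few lines. The classifier is assumed to output per-patch keep probabilities; names and the 3×3 pooling window are illustrative, not the paper's exact design:

```python
import numpy as np

def prune_patches(patch_feats, keep_scores, grid_hw, threshold=0.5):
    """Drop likely-background patches before the expensive encoder.

    keep_scores: per-patch probability of being informative, assumed to
    come from a lightweight binary classifier. A 3x3 max-pool refinement
    spreads each strong score to its neighbors, so text regions are not
    fragmented by hard thresholding.
    """
    H, W = grid_hw
    s = np.asarray(keep_scores, dtype=float).reshape(H, W)
    p = np.pad(s, 1, mode="edge")
    # Max over the 9 shifted views == 3x3 max-pool with stride 1.
    shifted = [p[i:i + H, j:j + W] for i in range(3) for j in range(3)]
    refined = np.max(shifted, axis=0)
    keep = refined.reshape(-1) >= threshold
    return patch_feats[keep], keep

feats = np.arange(16 * 4, dtype=float).reshape(16, 4)  # 4x4 patch grid
scores = np.full(16, 0.1)
scores[5] = 0.9  # one informative patch at grid position (1, 1)
kept, mask = prune_patches(feats, scores, (4, 4))
```

A single strong patch pulls in its eight neighbors, while far-away background patches are dropped before the model ever sees them.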

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

An LLM Agentic Approach for Legal-Critical Software: A Case Study for Tax Prep Software

Researchers developed a multi-agent LLM system that translates legal statutes into executable software, using U.S. tax preparation as a test case. The system achieved a 45% success rate using GPT-4o-mini, significantly outperforming larger frontier models such as GPT-4o and Claude 3.5, which achieved only 9-15% success rates on complex tax code tasks.

🧠 GPT-4 · 🧠 Claude
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Bridging Computational Social Science and Deep Learning: Cultural Dissemination-Inspired Graph Neural Networks

Researchers introduce AxelGNN, a new Graph Neural Network architecture inspired by cultural dissemination theory that addresses key limitations of existing GNNs including oversmoothing and poor handling of heterogeneous relationships. The model demonstrates superior performance in node classification and influence estimation while maintaining computational efficiency across both homophilic and heterophilic graphs.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Uni-NTFM: A Unified Foundation Model for EEG Signal Representation Learning

Researchers developed Uni-NTFM, a new foundation model for EEG signal analysis that incorporates biological neural mechanisms and scales to a record 1.9 billion parameters. The model was pre-trained on 28,000 hours of EEG data and outperformed existing models across nine downstream tasks by aligning its architecture with actual brain function.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Towards Personalized Deep Research: Benchmarks and Evaluations

Researchers introduce PDR-Bench, the first benchmark for evaluating personalization in Deep Research Agents (DRAs), featuring 250 realistic user-task queries across 10 domains. The benchmark uses a new PQR Evaluation Framework to measure personalization alignment, content quality, and factual reliability in AI research assistants.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play

Researchers introduce Vision-Zero, a self-improving AI framework that trains vision-language models through competitive games without requiring human-labeled data. The system uses strategic self-play and can work with arbitrary images, achieving state-of-the-art performance on reasoning and visual understanding tasks while reducing training costs.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Training-Free Reward-Guided Image Editing via Trajectory Optimal Control

Researchers have developed a new training-free framework for reward-guided image editing using diffusion models. The approach treats image editing as a trajectory optimal control problem, allowing for better preservation of source image content while enhancing target rewards compared to existing methods.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL Problems

Researchers developed ELMUR, a new AI architecture that uses external memory to help robots make better decisions over extremely long time periods. The system achieved 100% success on tasks requiring memory of up to one million steps and nearly doubled performance on robotic manipulation tasks compared to existing methods.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

TIGeR: Tool-Integrated Geometric Reasoning in Vision-Language Models for Robotics

Researchers have developed TIGeR, a framework that enhances Vision-Language Models with precise geometric reasoning capabilities for robotics applications. The system enables VLMs to execute centimeter-level accurate computations by integrating external computational tools, moving beyond qualitative spatial reasoning to quantitative precision required for real-world robotic manipulation.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

AMiD: Knowledge Distillation for LLMs with α-mixture Assistant Distribution

Researchers from KAIST propose AMiD, a new knowledge distillation framework that improves the efficiency of training smaller language models by transferring knowledge from larger models. The technique introduces α-mixture assistant distribution to address training instability and capacity gaps in existing approaches.
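AMiD's exact α-mixture family isn't reproduced here; the sketch below uses its simplest conceivable member, an arithmetic interpolation between teacher and student distributions, to show why an assistant target can ease distillation: the student chases a distribution that sits between the teacher and its current self. All names are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def assistant_distribution(teacher_logits, student_logits, alpha):
    """Arithmetic alpha-mixture of teacher and student distributions.

    A generic interpolated assistant; AMiD's actual alpha-mixture
    family is more general than this arithmetic special case.
    """
    pt = softmax(teacher_logits)
    ps = softmax(student_logits)
    return alpha * pt + (1 - alpha) * ps

def kd_loss(teacher_logits, student_logits, alpha=0.5, eps=1e-12):
    """KL(assistant || student): alpha=0 gives a zero-loss no-op,
    alpha=1 recovers plain teacher-to-student distillation."""
    pa = assistant_distribution(teacher_logits, student_logits, alpha)
    ps = softmax(student_logits)
    return float(np.sum(pa * (np.log(pa + eps) - np.log(ps + eps))))
```

Sliding α between 0 and 1 interpolates the difficulty of the target, which is one way to think about bridging the capacity gap the summary mentions.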

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Kaleido: Open-Sourced Multi-Subject Reference Video Generation Model

Researchers have introduced Kaleido, an open-source AI model for generating consistent videos from multiple reference images of subjects. The framework addresses key limitations in subject-to-video generation through improved data construction and a novel Reference Rotary Positional Encoding technique.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Agent Data Protocol: Unifying Datasets for Diverse, Effective Fine-tuning of LLM Agents

Researchers introduce Agent Data Protocol (ADP), a standardized format for unifying diverse AI agent training datasets across different formats and tools. The protocol enabled training on 13 unified datasets, achieving ~20% performance gains over base models and state-of-the-art results on coding, browsing, and tool use benchmarks.
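The idea of a unifying format can be made concrete with a toy schema: heterogeneous agent logs are normalized into one trajectory shape, then flattened into fine-tuning messages. The field names below are hypothetical, not ADP's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ADPStep:
    """One turn in a unified agent trajectory (hypothetical fields)."""
    role: str                      # "user", "assistant", or "tool"
    content: str
    tool_name: Optional[str] = None

@dataclass
class ADPTrajectory:
    task: str
    source_dataset: str
    steps: List[ADPStep] = field(default_factory=list)

def to_chat_messages(traj: ADPTrajectory):
    """Flatten a unified trajectory into generic fine-tuning messages."""
    return [{"role": s.role, "content": s.content} for s in traj.steps]

traj = ADPTrajectory(
    task="find the cheapest flight",
    source_dataset="example-web-dataset",
    steps=[
        ADPStep("user", "Find me the cheapest flight to Tokyo."),
        ADPStep("assistant", "search_flights(dest='Tokyo')",
                tool_name="search_flights"),
        ADPStep("tool", "[{'price': 412}, {'price': 530}]"),
    ],
)
messages = to_chat_messages(traj)
```

Once every source dataset round-trips through one such shape, mixing 13 datasets into a single training run becomes a concatenation rather than 13 bespoke converters.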

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Multimodal Large Language Models for Low-Resource Languages: A Case Study for Basque

Researchers developed multimodal large language models for Basque, a low-resource language, finding that a training mix with only 20% Basque data is enough for solid performance. The study shows that a specialized Basque language backbone isn't required, potentially enabling MLLM development for other underrepresented languages.

🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Agile Flight Emerges from Multi-Agent Competitive Racing

Researchers demonstrate that multi-agent competitive training enables AI agents to develop agile flight capabilities and strategic behaviors that outperform traditional single-agent training methods. The approach shows superior sim-to-real transfer and generalization when applied to drone racing scenarios with complex environments and obstacles.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

A Systematic Analysis of Biases in Large Language Models

A comprehensive study analyzed four major large language models (LLMs) across political, ideological, alliance, language, and gender dimensions, revealing persistent biases despite efforts to make them neutral. The research used various experimental methods including news summarization, stance classification, UN voting patterns, multilingual tasks, and survey responses to uncover these systematic biases.

Page 279 of 1263