y0news

AI × Crypto News Feed

Real-time AI-curated news from 34,840+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

KARMA-MV: A Benchmark for Causal Question Answering on Music Videos

Researchers introduce KARMA-MV, a large-scale dataset of 37,737 multiple-choice questions derived from 2,682 YouTube music videos, designed to benchmark AI models' ability to reason about causal relationships between visual dynamics and musical structure. The dataset leverages LLM-based generation for scalability and proposes a causal knowledge graph approach to improve vision-language model performance on cross-modal audio-visual reasoning tasks.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

CAMAL: Improving Attention Alignment and Faithfulness with Segmentation Masks

Researchers introduce CAMAL, a method that leverages segmentation masks to improve attention alignment and faithfulness in vision models across deep learning and reinforcement learning paradigms. The approach achieves over 35% improvements in attention faithfulness while maintaining or improving generalization performance without additional inference costs.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Why Retrying Fails: Context Contamination in LLM Agent Pipelines

Researchers introduce the Context-Contaminated Restart Model (CCRM) to formally analyze why LLM agents fail at higher rates when retrying tasks after errors, showing that failed attempts pollute the context window and increase subsequent error rates 7.1x. The model provides closed-form formulas for success probability, optimal pipeline depth allocation, and quantifies the exact benefit of clearing context before retry attempts.
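The contamination effect is easy to illustrate with a toy retry model (not the paper's actual closed-form CCRM formulas; the probabilities and the 7.1x inflation factor applied per failure below are hypothetical stand-ins):

```python
def success_prob(p_fail, contamination, attempts, clear_context):
    """P(at least one success within `attempts` tries).

    Without clearing, every failed attempt inflates the next
    attempt's failure rate by `contamination` (capped at 1.0);
    with clearing, retries are independent draws at the
    original failure rate.
    """
    p_all_fail = 1.0
    p = p_fail
    for _ in range(attempts):
        p_all_fail *= p
        if not clear_context:
            p = min(1.0, p * contamination)
    return 1.0 - p_all_fail

# Hypothetical numbers: 30% first-attempt failure, 7.1x error inflation.
contaminated = success_prob(0.3, 7.1, attempts=3, clear_context=False)
cleared = success_prob(0.3, 7.1, attempts=3, clear_context=True)
```

Under these toy numbers, contaminated retries add almost nothing (the inflated failure rate saturates at 1.0 after the first miss), while context-cleared retries compound normally.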

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

PrepBench: How Far Are We from Natural-Language-Driven Data Preparation?

Researchers introduce PrepBench, a new benchmark for evaluating how well large language models can handle natural language-driven data preparation tasks. The benchmark reveals that despite recent LLM advances, current models still struggle significantly with translating user intent into executable data preparation workflows, particularly when handling ambiguous requirements and complex real-world datasets.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Attention-based graph neural networks: a survey

A comprehensive survey paper systematizes recent advances in attention-based graph neural networks (GNNs), proposing a two-level taxonomy spanning three developmental stages: graph recurrent attention networks, graph attention networks, and graph transformers. The work addresses a gap in the literature by providing a structured analysis of how attention mechanisms enhance GNNs' ability to learn discriminative features while filtering noise in graph-structured data.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

SGC-RML: A reliable and interpretable longitudinal assessment for PD in real-world DNS

SGC-RML is a new AI framework that improves Parkinson's disease assessment by combining speech, gait, and wearable sensor data while providing reliability estimates and confidence measures. The model achieves strong predictive performance across multiple datasets and can reject uncertain assessments or recommend retesting, addressing critical gaps in real-world digital health monitoring.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 5/10

Efficient Prompt Learning for Traffic Forecasting

Researchers propose SimpleST, a lightweight prompt tuning framework that enhances spatio-temporal graph neural networks' ability to generalize across different traffic prediction scenarios. By keeping pre-trained model parameters fixed while adapting through efficient prompting, the approach reduces computational overhead while improving accuracy on real-world urban datasets.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Reasoning-Aware Training for Time Series Forecasting

Researchers introduce STRIDE, a framework that integrates large language model reasoning into time series foundation models by projecting LLM reasoning into continuous embedding spaces rather than discrete tokens. The approach achieves state-of-the-art forecasting performance while providing interpretable reasoning, addressing the modality gap that previously limited combining LLMs with numerical time series data.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Privacy-Aware Video Anomaly Detection through Orthogonal Subspace Projection

Researchers propose Orthogonal Projection Layer (OPL), a privacy-preserving technique for video anomaly detection systems that removes facial attributes while maintaining detection accuracy. The approach uses weak supervision to suppress identifying information without adversarial training, introducing a new framework for evaluating privacy-utility tradeoffs in surveillance applications.
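The projection operation itself is standard linear algebra; a minimal sketch, assuming an already-learned orthonormal basis for the identity-attribute subspace (the weak-supervision training that finds this basis, and the dimensions used, are assumptions, not the paper's setup):

```python
import numpy as np

def remove_subspace(features, id_basis):
    """Project feature vectors onto the orthogonal complement of
    the subspace spanned by `id_basis` (orthonormal columns),
    stripping the identity-bearing component.
    """
    # Component inside the identity subspace: U @ (U.T @ x)
    identity_part = (id_basis @ (id_basis.T @ features.T)).T
    return features - identity_part

rng = np.random.default_rng(0)
# Hypothetical sizes: 8 identity directions in a 64-d feature space.
U, _ = np.linalg.qr(rng.normal(size=(64, 8)))
X = rng.normal(size=(5, 64))
X_clean = remove_subspace(X, U)
```

After projection, the residual features carry zero energy along the identity directions, which is the sense in which facial attributes are suppressed without adversarial training.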

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Lattice Deduction Transformers

Researchers introduce Lattice Deduction Transformers (LDT), a specialized neural architecture that achieves near-perfect accuracy on constraint-solving puzzles like Sudoku and Mazes while remaining logically sound. The approach demonstrates that smaller models with domain-specific architectures can outperform large language models on reasoning tasks.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Fitting Multilinear Polynomials for Logic Gate Networks

Researchers propose a novel approach to training learnable logic gate networks by representing 2-input Boolean gates as multilinear polynomials in 4-dimensional space, reducing a vector-quantization problem from 16 to 4 parameters per neuron. The CovJac method outperforms the baseline Soft-Mix approach, particularly at greater network depths, by addressing the gradient starvation issues that cause performance collapse in deeper architectures.
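The 4-parameter encoding follows from the fact that any 2-input Boolean gate is determined by its four truth-table corners, which pin down a unique multilinear polynomial f(a,b) = w0 + w1·a + w2·b + w3·ab. A sketch of that encoding (the CovJac training procedure itself is not reproduced here):

```python
import itertools

def multilinear_coeffs(truth_table):
    """Fit f(a,b) = w0 + w1*a + w2*b + w3*a*b to a 2-input gate.

    `truth_table` maps (a, b) in {0,1}^2 to the gate's output;
    the four coefficients follow directly from the corner values.
    """
    f00 = truth_table[(0, 0)]
    f01 = truth_table[(0, 1)]
    f10 = truth_table[(1, 0)]
    f11 = truth_table[(1, 1)]
    return (f00,             # w0: value at the origin
            f10 - f00,       # w1: effect of a alone
            f01 - f00,       # w2: effect of b alone
            f11 - f10 - f01 + f00)  # w3: interaction term

def evaluate(w, a, b):
    w0, w1, w2, w3 = w
    return w0 + w1 * a + w2 * b + w3 * a * b

xor = {(a, b): a ^ b for a, b in itertools.product([0, 1], repeat=2)}
w = multilinear_coeffs(xor)  # XOR becomes (0, 1, 1, -2)
```

All 16 two-input gates are reachable by continuously varying these four coefficients, which is what makes the representation amenable to gradient-based training.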

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Research on Security Enhancement Methods for Adversarial Robust Large Language Model Intelligent Agents for Medical Decision-Making Tasks

Researchers developed ARSM-Agent, a security-enhanced framework for medical decision-making AI systems that defends against adversarial attacks through multi-module validation. The system reduces attack success rates to 8.7% while maintaining 91% knowledge consistency, demonstrating significant improvements over existing baseline approaches.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Path-Coupled Bellman Flows for Distributional Reinforcement Learning

Researchers propose Path-Coupled Bellman Flows (PCBF), a novel distributional reinforcement learning method that addresses limitations in existing flow-based approaches by using source-consistent paths and shared noise coupling to improve training stability and return distribution fidelity. The approach demonstrates competitive performance on benchmark tasks while maintaining computational efficiency through variance-reduction techniques.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Sketch-and-Verify: Structured Inference-Time Scaling via Program Sketching

Sketch-and-Verify is an inference-time scaling technique that improves small language model performance by having the LLM generate multiple algorithmic strategies as program sketches, then filling and verifying them. On HumanEval+, this approach delivers superior cost-performance within a model tier compared to flat sampling, though upgrading to a stronger model tier remains more effective than scaling test-time compute on smaller models.

🧠 Gemini
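The verify step can be sketched as a filter over candidate programs. In the real pipeline the candidates would be LLM-filled program sketches; here they are hypothetical hand-written stand-ins:

```python
def verify_candidates(candidates, test_cases):
    """Return the first candidate whose outputs match all test cases.

    Stands in for the 'verify' stage of sketch-then-verify: each
    candidate is a filled program sketch, and a crashing candidate
    simply fails verification.
    """
    for fn in candidates:
        try:
            if all(fn(*args) == expected for args, expected in test_cases):
                return fn
        except Exception:
            continue
    return None

# Hypothetical candidate strategies for "sum of squares of a list".
candidates = [
    lambda xs: sum(xs) ** 2,            # wrong strategy
    lambda xs: sum(x * x for x in xs),  # correct strategy
]
tests = [(([1, 2, 3],), 14), (([0],), 0)]
best = verify_candidates(candidates, tests)
```

The cost-performance claim follows from this structure: sampling several cheap sketches and filtering them is often cheaper than one call to a much larger model, at least within a model tier.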
🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

A Robust Out-of-Distribution Detection Framework via Synergistic Smoothing

Researchers introduce ROSS, a robust out-of-distribution detection framework that combines median smoothing with instability quantification to defend machine learning systems against adversarial attacks. The method achieves state-of-the-art performance by leveraging the observation that OOD samples exhibit higher instability under perturbations, outperforming prior defenses by up to 40 AUROC points.
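The instability signal is simple to sketch: perturb the input and measure how much the prediction moves. A minimal version (median smoothing and the paper's exact score are not reproduced; `predict` is a hypothetical model interface mapping an input vector to an output vector):

```python
import numpy as np

def instability_score(predict, x, noise_scale=0.1, n_samples=32, seed=0):
    """Mean deviation of a model's output under small random input
    perturbations. OOD samples tend to be less stable than
    in-distribution ones, so a higher score suggests OOD.
    """
    rng = np.random.default_rng(seed)
    clean = predict(x)
    total = 0.0
    for _ in range(n_samples):
        noisy = x + rng.normal(scale=noise_scale, size=x.shape)
        total += float(np.abs(predict(noisy) - clean).sum())
    return total / n_samples

# A perfectly flat model is maximally stable; a linear model is not.
flat = lambda x: np.array([0.5, 0.5])
linear = lambda x: np.array([x.sum(), -x.sum()])
flat_score = instability_score(flat, np.zeros(4))
linear_score = instability_score(linear, np.zeros(4))
```

A real detector would threshold this score per dataset; the threshold, like the noise scale, is a deployment choice rather than part of the method.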

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

SLayerGen: a Crystal Generative Model for all Space and Layer Groups

SLayerGen introduces a generative AI model capable of creating crystal structures constrained to space and layer groups, addressing limitations in existing models that fail to account for diperiodic materials like 2D superconductors and thin film semiconductors. The model combines discrete autoregressive lattice generation, transformer-based sampling, and equivariant diffusion, achieving superior performance on layered material discovery while correcting mathematical inconsistencies in prior diffusion approaches.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

TRAM: Training Approximate Multiplier Structures for Low-Power AI Accelerators

Researchers have developed TRAM, a technique that jointly optimizes low-power approximate multiplier structures with AI model training parameters, achieving up to 27% power reduction in vision transformers without significant accuracy loss. This approach differs from prior methods by integrating hardware design with model training rather than designing multipliers separately.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

PromptDx: Differentiable Prompt Tuning for Multimodal In-Context Alzheimer's Diagnosis

Researchers introduce PromptDx, a novel AI framework that combines differentiable prompt tuning with multimodal learning to diagnose Alzheimer's Disease using MRI and biomarker data. The method achieves competitive performance using only 1% of context samples compared to 30% in standard approaches, demonstrating significant data efficiency gains for medical imaging applications.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Probing the Impact of Scale on Data-Efficient, Generalist Transformer World Models for Atari

Researchers demonstrate that transformer-based world models exhibit distinct scaling behaviors across Atari environments, with joint multi-task training stabilizing performance gains. The study reveals that individual environments respond differently to model scaling, but unified training across 26 Atari games yields consistent improvements regardless of inherent task complexity.

Page 426 of 1394