y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,653+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

A cross-species neural foundation model for end-to-end speech decoding

Researchers developed a new Brain-to-Text (BIT) framework that uses cross-species neural foundation models to decode speech from brain activity with significantly improved accuracy. Compared with previous methods, the system reduces the word error rate from 24.69% to 10.22% and translates both attempted and imagined speech into text.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3

InnoGym: Benchmarking the Innovation Potential of AI Agents

Researchers introduce InnoGym, the first benchmark designed to evaluate AI agents' innovation potential rather than just correctness. The framework measures both performance gains and methodological novelty across 18 real-world engineering and scientific tasks, revealing that while AI agents can generate novel approaches, their gains are not yet robust enough to deliver significant performance improvements.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs

Researchers propose TRIM-KV, a novel approach that learns token importance for memory-bounded LLM inference through lightweight retention gates, addressing the quadratic cost of self-attention and the unbounded growth of the key-value cache. The method outperforms existing eviction baselines across multiple benchmarks and provides insights into LLM interpretability through learned retention scores.
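The memory-bounded cache idea can be illustrated with a toy eviction loop: once the cache exceeds its budget, drop the entries with the lowest retention scores while preserving token order. The scores below are hard-coded stand-ins; TRIM-KV learns them with lightweight gates, which is not reproduced here.

```python
# Toy sketch of score-driven KV-cache eviction under a memory budget.
# Scores here are supplied directly; TRIM-KV learns them during training.

def evict_kv(cache, budget):
    """Keep the `budget` highest-scoring entries, preserving token order.

    `cache` is a list of (token, score, key, value) tuples.
    """
    if len(cache) <= budget:
        return list(cache)
    top = sorted(range(len(cache)), key=lambda i: cache[i][1], reverse=True)[:budget]
    keep = set(top)
    return [e for i, e in enumerate(cache) if i in keep]

cache = [
    ("The",     0.10, "k0", "v0"),
    ("capital", 0.90, "k1", "v1"),
    ("of",      0.20, "k2", "v2"),
    ("France",  0.95, "k3", "v3"),
    ("is",      0.15, "k4", "v4"),
]
pruned = evict_kv(cache, budget=3)
print([t for t, *_ in pruned])  # → ['capital', 'of', 'France']
```

Low-scoring function words are evicted first, so attention at later steps runs over a bounded cache instead of the full history.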

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3

Towards Transferable Defense Against Malicious Image Edits

Researchers propose TDAE, a new defense framework that protects images from malicious AI-powered edits by using imperceptible perturbations and coordinated image-text optimization. The system employs FlatGrad Defense Mechanism for visual protection and Dynamic Prompt Defense for textual enhancement, achieving better cross-model transferability than existing methods.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 2

RMAAT: Astrocyte-Inspired Memory Compression and Replay for Efficient Long-Context Transformers

Researchers introduce RMAAT (Recurrent Memory Augmented Astromorphic Transformer), a new architecture inspired by brain astrocyte cells that addresses the quadratic complexity problem in Transformer models for long sequences. The system uses recurrent memory tokens and adaptive compression to achieve linear complexity while maintaining competitive accuracy on benchmark tests.
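The linear-complexity recipe can be shown with a toy: process the sequence in fixed-size segments and carry a small set of compressed memory tokens between them, so each step touches only a constant-size window. The sketch below substitutes plain mean pooling for RMAAT's astrocyte-inspired gating, and uses scalars in place of token embeddings.

```python
# Toy recurrent memory compression for long sequences: segments are
# processed in order, and each segment's context is squeezed into a few
# "memory tokens" that are prepended to the next segment. Mean pooling
# stands in for RMAAT's learned compression.

def compress(vectors, n_mem):
    """Mean-pool `vectors` into at most `n_mem` memory values."""
    chunk = max(1, len(vectors) // n_mem)
    mem = []
    for i in range(0, len(vectors), chunk):
        block = vectors[i:i + chunk]
        mem.append(sum(block) / len(block))
    return mem[:n_mem]

def process_long_sequence(seq, seg_len=4, n_mem=2):
    memory = []
    for start in range(0, len(seq), seg_len):
        # each step sees only memory + one segment: O(seg_len) work,
        # so total cost is linear in len(seq)
        segment = memory + seq[start:start + seg_len]
        memory = compress(segment, n_mem)
    return memory

final_memory = process_long_sequence([float(i) for i in range(12)])
print(final_memory)
```

However long the input grows, the carried state stays at `n_mem` values, which is the source of the linear scaling.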

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

AgentOCR: Reimagining Agent History via Optical Self-Compression

Researchers introduce AgentOCR, a framework that converts AI agent interaction histories from text to compressed visual format, reducing token usage by over 50% while maintaining 95% performance. The system uses visual caching and adaptive compression to address memory bottlenecks in large language model deployments.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

A Learnable Wavelet Transformer for Long-Short Equity Trading and Risk-Adjusted Return Optimization

Researchers developed WaveLSFormer, a wavelet-based Transformer model that directly generates market-neutral long/short trading portfolios from financial time series data. The AI system achieved a 60.7% cumulative return and 2.16 Sharpe ratio across six industry groups, significantly outperforming traditional ML models like LSTM and standard Transformers.
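The wavelet front end can be illustrated with the simplest case: one level of the Haar transform, which splits a price series into a smooth trend channel (pairwise averages) and a detail channel (pairwise differences). WaveLSFormer learns its wavelet filters; the fixed Haar filters below are a stand-in.

```python
import math

# One level of the Haar discrete wavelet transform. Each adjacent pair of
# samples becomes one trend coefficient and one detail coefficient; the
# 1/sqrt(2) scaling makes the transform orthonormal (energy-preserving).

def haar_step(series):
    pairs = list(zip(series[::2], series[1::2]))
    trend = [(a + b) / math.sqrt(2) for a, b in pairs]
    detail = [(a - b) / math.sqrt(2) for a, b in pairs]
    return trend, detail

prices = [100.0, 102.0, 101.0, 105.0]
trend, detail = haar_step(prices)
print(len(trend), len(detail))  # → 2 2
```

Because the transform is invertible ((trend + detail) / sqrt(2) recovers the first sample of each pair), no information is lost; the model simply gets the series separated by frequency band.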

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 2

ButterflyMoE: Sub-Linear Ternary Experts via Structured Butterfly Orbits

ButterflyMoE introduces a breakthrough approach to reduce memory requirements for AI expert models by 150× through geometric parameterization instead of storing independent weight matrices. The method uses shared ternary prototypes with learned rotations to achieve sub-linear memory scaling, enabling deployment of multiple experts on edge devices.
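The memory argument can be made concrete with a toy parameter count: instead of storing an independent weight vector per expert, store one shared ternary prototype plus a tiny rotation parameter per expert. The 2-D pairwise rotations and shapes below are illustrative assumptions, not the paper's butterfly parameterization.

```python
import math

# Toy version of "shared prototype + learned rotation per expert".
# Dense MoE: E experts x d weights. Shared scheme: d ternary values
# (storable in ~1.6 bits each) plus one angle per expert.

prototype = [1, 0, -1, 1]       # shared ternary prototype (d = 4)
angles = [0.0, 0.3, 1.2]        # one rotation parameter per expert (E = 3)

def expert_weights(theta):
    """Materialize one expert by rotating consecutive coordinate pairs."""
    c, s = math.cos(theta), math.sin(theta)
    w = []
    for i in range(0, len(prototype), 2):
        x, y = prototype[i], prototype[i + 1]
        w += [c * x - s * y, s * x + c * y]
    return w

dense_params = len(angles) * len(prototype)   # 3 experts x 4 floats = 12
shared_params = len(prototype) + len(angles)  # 4 ternary + 3 angles = 7
print(dense_params, shared_params)            # → 12 7
```

The saving grows with expert count: adding an expert costs one angle instead of a full weight matrix, which is the sub-linear scaling the paper targets (its reported 150× comes from much larger shapes and structured butterfly rotations).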

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

HalluGuard: Demystifying Data-Driven and Reasoning-Driven Hallucinations in LLMs

Researchers introduce HalluGuard, a new framework that identifies and addresses both data-driven and reasoning-driven hallucinations in Large Language Models. The system achieved state-of-the-art performance across 10 benchmarks and 9 LLM backbones, offering a unified approach to improve AI reliability in critical domains like healthcare and law.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3

Reward Models Inherit Value Biases from Pretraining

A comprehensive study of 10 leading reward models reveals they inherit significant value biases from their base language models, with Llama-based models preferring 'agency' values while Gemma-based models favor 'communion' values. This bias persists even when using identical preference data and training processes, suggesting that the choice of base model fundamentally shapes AI alignment outcomes.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 2

Learn-to-Distance: Distance Learning for Detecting LLM-Generated Text

Researchers developed a new algorithm called Learn-to-Distance (L2D) that can detect AI-generated text from models like GPT, Claude, and Gemini with significantly improved accuracy. The method uses adaptive distance learning between original and rewritten text, achieving 54.3% to 75.4% relative improvements over existing detection methods across extensive testing.
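The underlying signal can be sketched simply: machine-generated text tends to change less when a model rewrites it, so the distance between a text and its rewrite is informative. L2D's contribution is learning that distance adaptively; the fixed Jaccard word distance and stubbed rewriter below are stand-ins for illustration only.

```python
# Illustrative rewrite-distance detector. In practice `rewrite` would call
# an LLM, and the distance function would be learned, not fixed Jaccard.

def jaccard_distance(a, b):
    """1 - word-set overlap: 0.0 for identical texts, 1.0 for disjoint."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(wa & wb) / len(wa | wb)

def looks_machine_generated(text, rewrite, threshold=0.3):
    """Flag text whose rewrite stays unusually close to the original."""
    return jaccard_distance(text, rewrite(text)) < threshold

# Stub rewriter: identity ⇒ distance 0 ⇒ flagged as machine-like.
print(looks_machine_generated("the quick brown fox", lambda t: t))  # → True
```

A human-written text, which a rewriter typically rephrases heavily, would land above the threshold and be classified as human.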

🧠 AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 4

Stealthy Poisoning Attacks Bypass Defenses in Regression Settings

Researchers have developed new stealthy poisoning attacks that can bypass current defenses in regression models used across industrial and scientific applications. The study also introduces BayesClean, a novel defense mechanism that offers stronger protection against these attacks when the scale of poisoning is significant.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Beyond Single-Modal Analytics: A Framework for Integrating Heterogeneous LLM-Based Query Systems for Multi-Modal Data

Researchers introduce Meta Engine, a unified semantic query system that integrates multiple specialized LLM-based query systems to handle multi-modal data analysis. The system addresses fragmentation in current semantic query tools by combining specialized systems through five key components, achieving 3-24x better performance than existing baselines.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

WAXAL: A Large-Scale Multilingual African Language Speech Corpus

Researchers have released WAXAL, a large-scale multilingual speech dataset covering 24 Sub-Saharan African languages representing over 100 million speakers. The dataset includes 1,250 hours of transcribed speech for ASR and 235 hours of high-quality recordings for TTS, released under CC-BY-4.0 license to advance inclusive AI technologies.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

CSRv2: Unlocking Ultra-Sparse Embeddings

CSRv2 introduces a new training approach for ultra-sparse embeddings that reduces inactive neurons from 80% to 20% while delivering 14% accuracy gains. The method achieves 7x speedup over existing approaches and up to 300x improvements in compute and memory efficiency compared to dense embeddings.
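The sparsification step itself is standard and easy to show: keep only the k largest-magnitude activations of an embedding and zero the rest. CSRv2's contribution is a training recipe that keeps more neurons active and useful; only the inference-time top-k operation is sketched here.

```python
# Top-k magnitude sparsification of a dense embedding. The surviving
# entries keep their original values; everything else becomes 0.0, so the
# vector can be stored and matched in sparse (index, value) form.

def topk_sparsify(vec, k):
    """Zero all but the k largest-magnitude entries of `vec`."""
    if k >= len(vec):
        return list(vec)
    top = sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k]
    keep = set(top)
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

dense = [0.1, -2.0, 0.05, 1.5, -0.3, 0.7]
sparse = topk_sparsify(dense, k=2)
print(sparse)  # → [0.0, -2.0, 0.0, 1.5, 0.0, 0.0]
```

With only k nonzeros per vector, similarity search touches k multiplications per dimension pair instead of the full width, which is where the compute and memory savings come from.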

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

Trojans in Artificial Intelligence (TrojAI) Final Report

IARPA's TrojAI program investigated AI Trojans: malicious backdoors hidden in AI models that can cause system failures or allow unauthorized control. The multi-year initiative developed detection methods through weight analysis and trigger inversion, while identifying ongoing challenges in AI security that require continued research.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

AceGRPO: Adaptive Curriculum Enhanced Group Relative Policy Optimization for Autonomous Machine Learning Engineering

Researchers introduce AceGRPO, a new reinforcement learning framework for Autonomous Machine Learning Engineering that addresses behavioral stagnation in current LLM-based agents. The Ace-30B model trained with this method achieves 100% valid submission rate on MLE-Bench-Lite and matches performance of proprietary frontier models while outperforming larger open-source alternatives.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 2

MiniCPM-SALA: Hybridizing Sparse and Linear Attention for Efficient Long-Context Modeling

MiniCPM-SALA introduces a 9B-parameter hybrid language model architecture that combines sparse and linear attention mechanisms to handle ultra-long contexts up to 1M tokens. The model achieves 3.5x faster inference than full-attention models while reducing training costs by 75% through a continual training framework that transforms existing Transformer models.
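The linear half of the hybrid can be sketched on its own: with a positive feature map φ, softmax attention's O(n²) score matrix is replaced by running sums Σ φ(kᵢ)vᵢᵀ and Σ φ(kᵢ), giving time linear in sequence length. The feature map and tiny dimensions below are illustrative assumptions; MiniCPM-SALA's actual kernels, and its sparse-attention half, are not shown.

```python
# Causal linear attention via prefix sums. Each step updates two running
# accumulators instead of attending over the whole history, so the total
# cost is O(n * d_k * d_v) rather than O(n^2).

def phi(x):
    # a simple positive feature map (assumption, not the paper's kernel)
    return [1.0 + max(xi, 0.0) for xi in x]

def linear_attention(queries, keys, values):
    d_k, d_v = len(keys[0]), len(values[0])
    s = [[0.0] * d_v for _ in range(d_k)]   # running sum of phi(k) v^T
    z = [0.0] * d_k                         # running sum of phi(k)
    out = []
    for q, k, v in zip(queries, keys, values):
        fk = phi(k)
        for i in range(d_k):
            z[i] += fk[i]
            for j in range(d_v):
                s[i][j] += fk[i] * v[j]
        fq = phi(q)
        denom = sum(fq[i] * z[i] for i in range(d_k))
        out.append([sum(fq[i] * s[i][j] for i in range(d_k)) / denom
                    for j in range(d_v)])
    return out

qs = [[1.0, 0.0], [0.0, 1.0]]
out = linear_attention(qs, qs, [[1.0, 2.0], [3.0, 4.0]])
print(out[0])  # first position attends only to the first token → [1.0, 2.0]
```

Because the accumulators have fixed size, memory stays constant with context length, which is what makes 1M-token contexts tractable when combined with a sparse-attention component for precise retrieval.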

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

Elo-Evolve: A Co-evolutionary Framework for Language Model Alignment

Researchers introduce Elo-Evolve, a new framework for training AI language models using dynamic multi-agent competition instead of static reward functions. The method achieves 4.5x noise reduction and demonstrates superior performance compared to traditional alignment approaches when tested on Qwen2.5-7B models.
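The rating machinery behind competition-based training is the standard Elo update, shown below as a minimal sketch. How the paper wires these ratings into multi-agent model alignment is not reproduced here.

```python
# Standard Elo rating update: the winner gains points in proportion to how
# surprising the win was. A dynamic ranking like this can replace a static
# scalar reward when outputs compete head-to-head.

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated (r_a, r_b) after A scores `score_a` (1 win, 0.5 draw, 0 loss)."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

print(elo_update(1200, 1200, 1.0))  # → (1216.0, 1184.0): equal ratings, winner gains k/2
```

Note the update is zero-sum (total rating is conserved), and an upset win against a much stronger opponent moves ratings far more than an expected one.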

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

OmniCT: Towards a Unified Slice-Volume LVLM for Comprehensive CT Analysis

Researchers have developed OmniCT, a unified AI model that combines slice-level and volumetric analysis for CT scan interpretation, addressing a major limitation in medical imaging AI. The model introduces spatial consistency enhancement and organ-level semantic features, outperforming existing methods across clinical tasks.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Stable Asynchrony: Variance-Controlled Off-Policy RL for LLMs

MIT researchers introduce VCPO (Variance Controlled Policy Optimization), a new method that improves asynchronous reinforcement learning for LLM training by addressing high variance issues in off-policy settings. The technique dynamically scales learning rates and applies variance control to achieve stable training with 2.5x speedup while maintaining performance.
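The stabilization idea can be sketched in a few lines: track the variance of recent gradient estimates and shrink the step size when variance spikes, as it does with stale off-policy samples. The specific scaling rule below is a plausible stand-in, not VCPO's actual formula.

```python
# Variance-aware step sizing: a running window of gradient estimates feeds
# a simple shrinkage rule, lr / (1 + variance), so noisy off-policy
# batches take smaller, safer steps.

class VarianceControlledStep:
    def __init__(self, base_lr=0.1, window=8):
        self.base_lr, self.window, self.history = base_lr, window, []

    def step_size(self, grad_estimate):
        self.history = (self.history + [grad_estimate])[-self.window:]
        mean = sum(self.history) / len(self.history)
        var = sum((g - mean) ** 2 for g in self.history) / len(self.history)
        return self.base_lr / (1.0 + var)   # high variance → smaller step

opt = VarianceControlledStep()
print(opt.step_size(1.0))   # single sample, zero variance → full base lr
print(opt.step_size(-1.0))  # variance jumps → step shrinks
```

The payoff is that asynchronous workers can keep generating rollouts without waiting for the latest weights: when their samples drift off-policy and gradients get noisy, the step size automatically backs off instead of destabilizing training.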

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 5

Agentic Unlearning: When LLM Agent Meets Machine Unlearning

Researchers introduce 'agentic unlearning' through Synchronized Backflow Unlearning (SBU), a framework that removes sensitive information from both AI model parameters and persistent memory systems. The method addresses critical gaps in existing unlearning techniques by preventing cross-pathway recontamination between memory and parameters.
