y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,652+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks

Researchers analyzed Mixture-of-Experts (MoE) language models to determine optimal sparsity levels for different tasks. They found that reasoning tasks require balancing active compute (FLOPs) with optimal data-to-parameter ratios, while memorization tasks benefit from more parameters regardless of sparsity.
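The compute/capacity trade-off can be made concrete with a back-of-the-envelope sketch. The dimensions and the two-matrix parameter-counting convention below are illustrative assumptions, not figures from the paper:

```python
# Active vs. total parameters in a single MoE FFN layer (illustrative).
# "Sparsity" here is the fraction of experts NOT activated per token.

def moe_param_counts(d_model, d_ff, n_experts, top_k):
    """Return (total_expert_params, active_expert_params) for one MoE FFN layer."""
    per_expert = 2 * d_model * d_ff   # up- and down-projection weights
    total = n_experts * per_expert    # parameters stored (memorization capacity)
    active = top_k * per_expert       # parameters used per token (reasoning FLOPs)
    return total, active

total, active = moe_param_counts(d_model=1024, d_ff=4096, n_experts=64, top_k=2)
sparsity = 1 - active / total         # 1 - 2/64 = 0.96875
```

Scaling `n_experts` grows `total` (helping memorization) without changing `active`, which is the knob the paper's reasoning-vs-memorization analysis turns on.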

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

BinaryShield: Cross-Service Threat Intelligence in LLM Services using Privacy-Preserving Fingerprints

BinaryShield is the first privacy-preserving threat intelligence system that enables secure sharing of attack fingerprints across compliance boundaries for LLM services. The system addresses the critical security gap where organizations cannot share prompt injection attack intelligence between services due to privacy regulations, achieving an F1-score of 0.94 while providing 38x faster similarity search than dense embeddings.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 3

ERIS: Evolutionary Real-world Interference Scheme for Jailbreaking Audio Large Models

Researchers developed ERIS, a new framework that uses genetic algorithms to exploit Audio Large Models (ALMs) by disguising malicious instructions as natural speech with background noise. The system can bypass safety filters by embedding harmful content in real-world audio interference that appears harmless to humans and security systems.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

Beyond Frame-wise Tracking: A Trajectory-based Paradigm for Efficient Point Cloud Tracking

Researchers have developed TrajTrack, a new AI framework for 3D object tracking in LiDAR systems that achieves state-of-the-art performance while running at 55 FPS. The system improves tracking precision by 3.02% over existing methods by using historical trajectory data rather than computationally expensive multi-frame point cloud processing.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

BWCache: Accelerating Video Diffusion Transformers through Block-Wise Caching

Researchers have developed BWCache, a training-free method that accelerates Diffusion Transformer (DiT) video generation by up to 6× through block-wise feature caching and reuse. The technique exploits computational redundancy in DiT blocks across timesteps while maintaining visual quality, addressing a key bottleneck in real-world AI video generation applications.
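A minimal sketch of the caching idea, assuming a block's cached output is reused whenever its input has barely changed since the last timestep it was actually computed (the change metric and threshold here are illustrative, not BWCache's exact criterion):

```python
# Block-wise feature caching across diffusion timesteps (toy illustration).

def run_block(block_fn, block_id, x, cache, tol=1e-2):
    """Run one DiT block, reusing its cached output when the input is near-unchanged.

    Returns (output, cache_hit).
    """
    prev = cache.get(block_id)
    if prev is not None:
        prev_x, prev_out = prev
        # cheap change metric between current and cached input features
        diff = max(abs(a - b) for a, b in zip(x, prev_x))
        if diff < tol:
            return prev_out, True     # cache hit: skip the block entirely
    out = block_fn(x)                 # cache miss: recompute and store
    cache[block_id] = (list(x), out)
    return out, False
```

Across adjacent denoising timesteps the block inputs change little, so many steps hit the cache and the expensive `block_fn` runs only on a fraction of timesteps.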

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Distribution-Aligned Decoding for Efficient LLM Task Adaptation

Researchers introduce SVDecode, a new method for adapting large language models to specific tasks without extensive fine-tuning. The technique uses steering vectors during decoding to align output distributions with task requirements, improving accuracy by up to 5 percentage points while adding minimal computational overhead.
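The core mechanism can be sketched in a few lines: bias the next-token scores toward a task-aligned direction at decode time, with no weight updates. The toy logits and steering vector below are illustrative assumptions, not values from the paper:

```python
# Steering-vector decoding sketch: shift logits toward a task direction.

def steered_argmax(logits, steering, alpha=1.0):
    """Pick the next-token index after adding a scaled steering vector to the logits."""
    adjusted = [l + alpha * s for l, s in zip(logits, steering)]
    return max(range(len(adjusted)), key=adjusted.__getitem__)

logits   = [2.0, 1.9, 0.5]   # base model prefers token 0
steering = [0.0, 0.5, 0.0]   # task adaptation nudges token 1
steered_argmax(logits, steering)   # token 1 wins after steering
```

The overhead per step is one vector addition, which is why this style of adaptation adds almost no latency compared with fine-tuning the model itself.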

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

Advancing Universal Deep Learning for Electronic-Structure Hamiltonian Prediction of Materials

Researchers developed NextHAM, a deep learning method for predicting electronic-structure Hamiltonians of materials, offering significant computational efficiency advantages over traditional DFT methods. The system introduces neural E(3)-symmetry architecture and a new dataset Materials-HAM-SOC with 17,000 material structures spanning 68 elements.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Bridging Draft Policy Misalignment: Group Tree Optimization for Speculative Decoding

Researchers introduce Group Tree Optimization (GTO), a new training method that improves speculative decoding for large language models by aligning draft model training with actual decoding policies. GTO achieves 7.4% better acceptance length and 7.7% additional speedup over existing state-of-the-art methods across multiple benchmarks and LLMs.
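For context, "acceptance length" is how many draft tokens the target model keeps per verification step. A minimal greedy-verification sketch (the toy target model is an assumption for illustration; GTO's tree-structured drafts are more elaborate):

```python
# Speculative decoding acceptance loop (greedy verification, toy version).

def accepted_prefix(draft_tokens, target_next_token):
    """Keep draft tokens while each matches the target model's greedy choice."""
    accepted = []
    for tok in draft_tokens:
        if tok != target_next_token(accepted):
            break                      # first mismatch ends the accepted prefix
        accepted.append(tok)
    return accepted

# toy target model: always continues the arithmetic sequence 1, 2, 3, ...
target = lambda ctx: len(ctx) + 1
accepted_prefix([1, 2, 3, 9], target)  # acceptance length 3: [1, 2, 3]
```

The better the draft model matches the target's actual decoding policy, the longer this accepted prefix gets, which is exactly the misalignment GTO's training objective targets.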

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

HEAPr: Hessian-based Efficient Atomic Expert Pruning in Output Space

Researchers introduce HEAPr, a novel pruning algorithm for Mixture-of-Experts (MoE) language models that decomposes experts into atomic components for more precise pruning. The method achieves nearly lossless compression at 20-25% pruning ratios while reducing computational costs by approximately 20%.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Train Once, Answer All: Many Pretraining Experiments for the Cost of One

Researchers developed a method to conduct multiple AI training experiments simultaneously within a single pretraining run, reducing computational costs while maintaining research validity. The approach was validated across ten experiments using models up to 2.7B parameters trained on 210B tokens, with minimal impact on training dynamics.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

Vid-LLM: A Compact Video-based 3D Multimodal LLM with Reconstruction-Reasoning Synergy

Researchers propose Vid-LLM, a new video-based 3D multimodal large language model that processes video inputs without requiring external 3D data for scene understanding. The model uses a Cross-Task Adapter module and Metric Depth Model to integrate geometric cues and maintain consistency across 3D tasks like question answering and visual grounding.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Scaling with Collapse: Efficient and Predictable Training of LLM Families

Researchers demonstrate that training loss curves for large language models can collapse onto universal trajectories when hyperparameters are optimally set, enabling more efficient LLM training. They introduce Celerity, a competitive LLM family developed using these insights, and show that deviation from collapse can serve as an early diagnostic for training issues.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

Barriers for Learning in an Evolving World: Mathematical Understanding of Loss of Plasticity

Researchers have identified the mathematical mechanisms behind 'loss of plasticity' (LoP), explaining why deep learning models struggle to continue learning in changing environments. The study reveals that properties promoting generalization in static settings actually hinder continual learning by creating parameter space traps.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

Stabilizing Policy Gradients for Sample-Efficient Reinforcement Learning in LLM Reasoning

Researchers have developed Curvature-Aware Policy Optimization (CAPO), a new algorithm that stabilizes policy-gradient training and improves sample efficiency for large language models by up to 30x. The method uses curvature information to identify and filter problematic training samples, intervening on fewer than 8% of tokens.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

GEM: A Gym for Agentic LLMs

Researchers introduced GEM (General Experience Maker), an open-source environment simulator designed for training large language models through experience-based learning rather than static datasets. The framework provides a standardized interface similar to OpenAI-Gym but specifically optimized for LLMs, featuring diverse environments, integrated tools, and compatibility with popular RL training frameworks.
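The Gym-style interface means an LLM agent interacts through a `reset`/`step` loop. A toy text environment in that shape (the class and environment below are illustrative, not GEM's actual API):

```python
# Gym-style text environment sketch for an LLM agent (toy, not GEM's API).

class EchoEnv:
    """Reward 1.0 when the agent's text action repeats the observation."""

    def reset(self):
        self.obs = "hello"
        return self.obs

    def step(self, action):
        reward = 1.0 if action == self.obs else 0.0
        done = True                       # single-turn episode
        return self.obs, reward, done, {}

env = EchoEnv()
obs = env.reset()
action = obs                              # trivial "agent" policy: echo the input
obs, reward, done, info = env.step(action)   # reward 1.0, episode done
```

Keeping the `(observation, reward, done, info)` contract is what lets existing RL training frameworks plug in without custom glue code.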

$MKR
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

RLP: Reinforcement as a Pretraining Objective

Researchers introduce RLP (Reinforcement Learning Pretraining), a new training method that incorporates reinforcement learning exploration into the pretraining phase rather than only post-training. The approach treats chain-of-thought reasoning as exploratory actions and achieved 19% performance improvements on math and science benchmarks across different model architectures.

$COMP
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

ExGRPO: Learning to Reason from Experience

Researchers introduce ExGRPO, a new framework that improves AI reasoning by reusing and prioritizing valuable training experiences based on correctness and entropy. The method shows consistent performance gains of +3.5-7.6 points over standard approaches across multiple model sizes while providing more stable training.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 3

On The Fragility of Benchmark Contamination Detection in Reasoning Models

New research reveals that benchmark contamination in large reasoning models (LRMs) is extremely difficult to detect, allowing developers to easily inflate performance scores on public leaderboards. The study shows that reinforcement learning methods like GRPO and PPO can effectively conceal contamination signals, undermining the integrity of AI model evaluations.

$NEAR
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 3

Untargeted Jailbreak Attack

Researchers have developed a new 'untargeted jailbreak attack' (UJA) that can compromise AI safety systems in large language models with over 80% success rate using only 100 optimization iterations. This gradient-based attack method expands the search space by maximizing unsafety probability without fixed target responses, outperforming existing attacks by over 30%.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

Expressive Power of Implicit Models: Rich Equilibria and Test-Time Scaling

Researchers provide mathematical proof that implicit models can achieve greater expressive power through increased test-time computation, explaining how these memory-efficient architectures can match larger explicit networks. The study validates this scaling property across image reconstruction, scientific computing, operations research, and LLM reasoning domains.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

RACE Attention: A Strictly Linear-Time Attention for Long-Sequence Training

Researchers introduce RACE Attention, a new linear-time alternative to traditional Softmax Attention that can process up to 75 million tokens in a single pass, compared to current GPU-optimized implementations that fail beyond 4 million tokens. The technology uses angular similarity and Gaussian random projections to achieve dramatic efficiency gains while maintaining performance across language modeling and classification tasks.
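The linearity comes from a standard kernel trick: with a feature map φ, attention output becomes φ(q)·(Σₖ φ(k)vᵀ) / (φ(q)·Σₖ φ(k)), so the key/value statistics are accumulated once in O(n) instead of comparing every query with every key in O(n²). RACE's feature map is built from angular similarity with Gaussian random projections; the simple exponential map below is a stand-in for illustration only:

```python
# Kernelized linear-time attention sketch (illustrative feature map, not RACE's).
import math

def phi(x):
    """Elementwise positive feature map (illustrative stand-in)."""
    return [math.exp(v) for v in x]

def linear_attention(queries, keys, values):
    d = len(values[0])
    # accumulate key/value statistics in a single O(n) pass
    s = [0.0] * len(keys[0])                        # running sum of phi(k)
    kv = [[0.0] * d for _ in range(len(keys[0]))]   # running sum of phi(k) v^T
    for k, v in zip(keys, values):
        fk = phi(k)
        for i, fki in enumerate(fk):
            s[i] += fki
            for j in range(d):
                kv[i][j] += fki * v[j]
    out = []
    for q in queries:                               # each query is O(1) in n
        fq = phi(q)
        denom = sum(a * b for a, b in zip(fq, s))
        out.append([sum(fq[i] * kv[i][j] for i in range(len(fq))) / denom
                    for j in range(d)])
    return out

# identical keys -> uniform attention weights -> output is the mean of values
linear_attention([[0.0]], [[0.0], [0.0]], [[1.0], [3.0]])   # [[2.0]]
```

Because memory and compute grow with the feature dimension rather than the sequence length squared, this family of methods is what makes multi-million-token passes feasible at all.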
