y0news

AI × Crypto News Feed

Real-time AI-curated news from 30,727+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Bearish · arXiv – CS AI · Mar 11 · 7/10
🧠

When Robots Obey the Patch: Universal Transferable Patch Attacks on Vision-Language-Action Models

Researchers have developed UPA-RFAS, a new adversarial attack framework that can successfully fool Vision-Language-Action (VLA) models used in robotics with universal physical patches that transfer across different models and real-world scenarios. The attack exploits vulnerabilities in AI-powered robots by using patches that can hijack attention mechanisms and cause semantic misalignment between visual and text inputs.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

SATURN: SAT-based Reinforcement Learning to Unleash LLMs Reasoning

Researchers introduce SATURN, a new reinforcement learning framework that uses Boolean Satisfiability (SAT) problems to improve large language models' reasoning capabilities. The framework addresses key limitations in existing RL approaches by enabling scalable task construction, automated verification, and precise difficulty control through curriculum learning.
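SATURN's actual task-construction pipeline isn't detailed in the summary; as a rough, hypothetical illustration of why SAT suits RL training, the sketch below generates random k-SAT instances (difficulty tunable via the clause-to-variable ratio), verifies candidate assignments in linear time, and brute-forces tiny instances. All function names here are made up for illustration.

```python
import itertools
import random

def random_ksat(num_vars, num_clauses, k=3, seed=0):
    """Generate a random k-SAT instance; the clause-to-variable ratio
    controls difficulty (3-SAT is hardest near a ratio of ~4.26)."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(1, num_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

def check(clauses, assignment):
    """Automated verification: any proposed assignment is checkable
    in time linear in the formula size."""
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def brute_force_sat(clauses, num_vars):
    """Exhaustive solver, fine for the tiny instances an RL curriculum starts with."""
    for bits in itertools.product([False, True], repeat=num_vars):
        if check(clauses, bits):
            return list(bits)
    return None
```

Scaling `num_vars` and `num_clauses` gives the "scalable construction" and "precise difficulty control" the paper claims, while `check` provides the cheap automated reward signal.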

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Reasoning Efficiently Through Adaptive Chain-of-Thought Compression: A Self-Optimizing Framework

Researchers propose SEER (Self-Enhancing Efficient Reasoning), a framework that compresses Chain-of-Thought reasoning in Large Language Models while maintaining accuracy. The study found that longer reasoning chains don't always improve performance and can increase latency by up to 5x; SEER cuts CoT length by 42.1% while improving accuracy.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Stepwise Guided Policy Optimization: Coloring your Incorrect Reasoning in GRPO

Researchers introduce Stepwise Guided Policy Optimization (SGPO), a new framework that improves upon Group Relative Policy Optimization (GRPO) by learning from incorrect reasoning responses in large language model training. SGPO addresses the limitation where GRPO fails to update policies when all responses in a group are incorrect, showing improved performance across multiple model sizes and reasoning benchmarks.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Small Language Models for Efficient Agentic Tool Calling: Outperforming Large Models with Targeted Fine-tuning

Researchers demonstrated that a fine-tuned small language model (SLM) with 350M parameters can significantly outperform large language models like ChatGPT in tool-calling tasks, achieving a 77.55% pass rate versus ChatGPT's 26%. This breakthrough suggests organizations can reduce AI operational costs while maintaining or improving performance through targeted fine-tuning of smaller models.

🏢 Meta · 🏢 Hugging Face · 🧠 ChatGPT
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Robust Training of Neural Networks at Arbitrary Precision and Sparsity

Researchers have developed a new framework for training neural networks at ultra-low precision and high sparsity by modeling quantization as additive noise rather than using traditional Straight-Through Estimators. The method enables stable training of A1W1 and sub-1-bit networks, achieving state-of-the-art results for highly efficient neural networks including modern LLMs.
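The paper's exact formulation isn't reproduced in the summary; a common way to model quantization as additive noise, as opposed to hard rounding with a Straight-Through Estimator, is uniform dither whose support matches the rounding-error range. This sketch (hypothetical, not the authors' code) contrasts the two:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, step):
    """Hard rounding to a uniform grid: the gradient is zero almost
    everywhere, which is why STE-based training resorts to a surrogate."""
    return np.round(w / step) * step

def noisy_quantize(w, step):
    """Additive-noise surrogate: uniform noise on [-step/2, step/2)
    mimics the statistics of rounding error while keeping the forward
    pass a smooth function of the weights (noise treated as a constant)."""
    noise = rng.uniform(-step / 2, step / 2, size=np.shape(w))
    return w + noise
```

Both outputs stay within `step/2` of the input, so the noise model is a statistically faithful stand-in for rounding during training, with the hard `quantize` used only at inference.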

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

From Self-Evolving Synthetic Data to Verifiable-Reward RL: Post-Training Multi-turn Interactive Tool-Using Agents

Researchers developed EigenData, a framework combining self-evolving synthetic data generation with reinforcement learning to train AI agents for multi-turn tool usage and dialogue. The system achieved 73% success on Airline tasks and 98.3% on Telecom benchmarks, matching frontier models while eliminating the need for expensive human annotation.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

MMGraphRAG: Bridging Vision and Language with Interpretable Multimodal Knowledge Graphs

Researchers introduce MMGraphRAG, a new AI framework that addresses hallucination issues in large language models by integrating visual scene graphs with text knowledge graphs through cross-modal fusion. The system uses SpecLink for entity linking and demonstrates superior performance in multimodal information processing across multiple benchmarks.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Periodic Asynchrony: An On-Policy Approach for Accelerating LLM Reinforcement Learning

Researchers propose a new asynchronous framework for LLM reinforcement learning that separates inference and training deployment, achieving 3-5x improvement in training throughput. The approach maintains on-policy correctness while enabling concurrent inference and training through a producer-consumer pipeline architecture.
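The summary doesn't spell out the producer-consumer mechanics; one way to picture "periodic asynchrony" (a hypothetical sketch, not the paper's implementation) is an inference producer stamping each rollout with the policy version it was sampled from, and a trainer that only consumes version-matched rollouts, bumping the version after each batch:

```python
import queue
import threading

rollouts = queue.Queue(maxsize=8)
policy_version = 0
lock = threading.Lock()

def producer(num_rollouts):
    """Inference side: generates rollouts stamped with the current policy
    version; in the real system this runs concurrently with training."""
    for i in range(num_rollouts):
        with lock:
            v = policy_version
        rollouts.put({"version": v, "data": f"rollout-{i}"})
    rollouts.put(None)  # sentinel: no more rollouts

def consumer():
    """Training side: accepts only on-policy (version-matched) rollouts,
    and advances the policy version once per batch of 4."""
    global policy_version
    used, dropped = 0, 0
    while (item := rollouts.get()) is not None:
        with lock:
            current = policy_version
        if item["version"] == current:
            used += 1
            if used % 4 == 0:  # periodic update after a full batch
                with lock:
                    policy_version += 1
        else:
            dropped += 1  # stale rollout: rejecting it preserves on-policy-ness
    return used, dropped
```

Dropping stale rollouts is what preserves on-policy correctness; the claimed 3-5x throughput gain comes from the producer never idling while the trainer updates.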

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠

From Data Statistics to Feature Geometry: How Correlations Shape Superposition

Researchers introduce Bag-of-Words Superposition (BOWS) to study how neural networks arrange features in superposition when using realistic correlated data. The study reveals that interference between features can be constructive rather than just noise, leading to semantic clusters and cyclical structures observed in language models.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Efficiently Aligning Draft Models via Parameter- and Data-Efficient Adaptation

Researchers introduce Efficient Draft Adaptation (EDA), a framework that significantly reduces the cost of adapting draft models for speculative decoding when target LLMs are fine-tuned. EDA achieves superior performance through decoupled architecture, data regeneration, and smart sample selection while requiring substantially less training resources than full retraining.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Democratising Clinical AI through Dataset Condensation for Classical Clinical Models

Researchers have developed a new framework that enables dataset condensation for non-differentiable clinical AI models like decision trees and Cox regression, using differential privacy to create synthetic medical datasets. This breakthrough allows healthcare institutions to share condensed synthetic data while preserving patient privacy and maintaining model utility across classification and survival prediction tasks.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

A Variational Latent Equilibrium for Learning in Cortex

Researchers propose a new biologically plausible framework for approximating backpropagation through time (BPTT) in neural networks that mimics how the brain learns spatiotemporal patterns. The approach uses energy conservation principles to create local, time-continuous learning equations that could enable more brain-like AI systems and physical neural computing circuits.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

The Missing Memory Hierarchy: Demand Paging for LLM Context Windows

Researchers developed Pichay, a demand paging system that treats LLM context windows like computer memory with hierarchical caching. The system reduces context consumption by up to 93% in production by evicting stale content and managing memory more efficiently, addressing fundamental scalability issues in AI systems.
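Pichay's internals aren't described beyond the analogy, but the OS-style mechanics can be sketched (hypothetically, with made-up names) as an LRU-managed resident set of context segments with a backing store and page faults:

```python
from collections import OrderedDict

class ContextPager:
    """Hypothetical demand-paging sketch for an LLM context window:
    recently used segments stay resident; stale ones are evicted to a
    backing store and re-fetched on access, like an OS page fault."""

    def __init__(self, budget_tokens):
        self.budget = budget_tokens
        self.resident = OrderedDict()  # segment_id -> token count, LRU order
        self.backing = {}              # evicted (stale) segments
        self.faults = 0

    def touch(self, seg_id, tokens):
        if seg_id in self.resident:
            self.resident.move_to_end(seg_id)  # mark as recently used
            return
        if seg_id in self.backing:             # page fault: re-admit
            tokens = self.backing.pop(seg_id)
            self.faults += 1
        self.resident[seg_id] = tokens
        # Evict least-recently-used segments until within the token budget.
        while sum(self.resident.values()) > self.budget:
            victim, size = self.resident.popitem(last=False)
            self.backing[victim] = size
```

The "up to 93%" context reduction would correspond to how much of the working set is stale at any moment; only the resident set occupies the model's window.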

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

PlayWorld: Learning Robot World Models from Autonomous Play

PlayWorld introduces a breakthrough AI system that trains robot world simulators entirely from autonomous robot self-play, eliminating the need for human demonstrations. The system achieves 40% improvements in failure prediction and 65% policy performance gains when deployed in real-world scenarios.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

DendroNN: Dendrocentric Neural Networks for Energy-Efficient Classification of Event-Based Data

Researchers have developed DendroNN, a novel neural network architecture inspired by brain dendrites that achieves up to 4x higher energy efficiency than current neuromorphic hardware for spatiotemporal event-based computing. The system uses spike sequence detection and a unique rewiring training method to process temporal data without requiring gradients or recurrent connections.

AI · Bearish · arXiv – CS AI · Mar 11 · 7/10
🧠

Security Considerations for Multi-agent Systems

A comprehensive study reveals that multi-agent AI systems (MAS) face distinct security vulnerabilities that existing frameworks inadequately address. The research evaluated 16 AI security frameworks against 193 identified threats across 9 categories, finding that no framework achieves majority coverage in any single category, with non-determinism and data leakage being the most under-addressed areas.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Large Language Model-Assisted Superconducting Qubit Experiments

Researchers have developed a framework that uses large language models (LLMs) to automate superconducting qubit experiments, potentially streamlining quantum computing research. The system successfully demonstrated autonomous resonator characterization and quantum non-demolition measurements, offering a more user-friendly approach to controlling complex quantum hardware.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Unveiling the Potential of Quantization with MXFP4: Strategies for Quantization Error Reduction

Researchers have developed two software techniques (OAS and MBS) that dramatically improve MXFP4 quantization accuracy for Large Language Models, reducing the accuracy gap with NVIDIA's NVFP4 from 10% to below 1%. This makes MXFP4 a viable alternative while retaining its 12% hardware-efficiency advantage in tensor cores.

🏢 Nvidia
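OAS and MBS themselves aren't described in the summary; for context, baseline MX-style FP4 quantization (which those techniques improve on) works roughly as below: each block of 32 values shares a single power-of-two scale, and elements snap to the E2M1 grid. This is a simplified sketch, not the MX spec's exact rounding rules:

```python
import numpy as np

# E2M1 (FP4) representable magnitudes: 0, 0.5, 1, 1.5, 2, 3, 4, 6
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_quantize(x, block=32):
    """Quantize a 1-D array MX-style: one shared power-of-two scale per
    block of 32, nearest-value rounding onto the FP4 magnitude grid."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block
    xb = np.pad(x, (0, pad)).reshape(-1, block)
    amax = np.max(np.abs(xb), axis=1, keepdims=True)
    # Shared scale chosen so the block maximum lands inside the FP4 range.
    scale = 2.0 ** (np.floor(np.log2(np.maximum(amax, 1e-30))) - 2)
    scaled = xb / scale
    mag = np.abs(scaled)
    idx = np.abs(mag[..., None] - FP4_GRID).argmin(axis=-1)  # nearest level
    q = np.sign(scaled) * FP4_GRID[idx]
    return (q * scale).reshape(-1)[: len(x)]
```

Because the whole block shares one coarse scale, a single outlier inflates everyone's step size; reducing that kind of error is the general problem the paper's OAS/MBS techniques address.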
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

BiCLIP: Domain Canonicalization via Structured Geometric Transformation

Researchers introduce BiCLIP, a new framework that improves vision-language models' ability to adapt to specialized domains through geometric transformations. The approach achieves state-of-the-art results across 11 benchmarks while maintaining simplicity and low computational requirements.

Page 240 of 1230