y0news

AI × Crypto News Feed

Real-time AI-curated news from 30,462+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · Mar 16 · 7/10

The Economics of AI Supply Chain Regulation

A game-theoretic study analyzes how regulatory policies affect AI supply chains in which foundation model providers serve downstream firms. The research finds that price-competition policies work best when compute costs are high, while quality-competition policies always improve consumer surplus, offering guidance for effective AI market regulation.

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Spend Less, Reason Better: Budget-Aware Value Tree Search for LLM Agents

Researchers propose Budget-Aware Value Tree (BAVT), a training-free framework that improves LLM agent efficiency by intelligently managing computational resources during multi-hop reasoning tasks. The system outperforms traditional approaches while using 4x fewer resources, demonstrating that smart budget management beats brute-force compute scaling.
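
A minimal sketch of the budget idea (not BAVT itself: the value heuristic, expansion stub, and budget accounting below are all assumptions) is a best-first tree search that stops expanding once its compute allowance is spent and returns the best trace found:

```python
# Hypothetical sketch of budget-aware, value-guided tree search.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Node:
    neg_value: float                    # heapq is a min-heap, so store -value
    state: str = field(compare=False)   # partial reasoning trace
    depth: int = field(compare=False, default=0)

def value_estimate(state: str) -> float:
    """Placeholder value signal; a real system might use model confidence."""
    return min(1.0, 0.2 * state.count("step"))

def expand(state: str) -> list:
    """Placeholder for sampling candidate next reasoning steps from an LLM."""
    return [f"{state} step{state.count('step')}.{i}" for i in range(2)]

def bavt_like_search(root: str, budget: int, max_depth: int = 5) -> str:
    frontier = [Node(-value_estimate(root), root)]
    best_state, best_val = root, value_estimate(root)
    spent = 0
    while frontier and spent < budget:
        node = heapq.heappop(frontier)  # always expand the most promising node
        if -node.neg_value > best_val:
            best_state, best_val = node.state, -node.neg_value
        if node.depth >= max_depth:
            continue
        for child in expand(node.state):
            spent += 1                  # one unit of compute per expansion
            heapq.heappush(frontier,
                           Node(-value_estimate(child), child, node.depth + 1))
    return best_state

print(bavt_like_search("Q:", budget=16))
```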

🧠 AI · Bearish · arXiv – CS AI · Mar 16 · 7/10

Purify Once, Edit Freely: Breaking Image Protections under Model Mismatch

Researchers have identified a critical vulnerability in image protection systems that use adversarial perturbations to prevent unauthorized AI editing. Two new purification methods can effectively remove these protections, creating a 'purify-once, edit-freely' attack where images become vulnerable to unlimited manipulation.
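
The mechanics are easy to see in a toy setting: protective perturbations are small and high-frequency, so even a crude low-pass filter can attenuate them while leaving smooth image content intact. A conceptual sketch only (plain mean filtering, not the paper's purification methods):

```python
# Toy 'purification': a mean filter removes high-frequency perturbations
# from a smooth image. Illustrative assumption, not the paper's algorithm.
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean filter as a crude low-pass 'purifier'."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out += padded[pad + dy : pad + dy + img.shape[0],
                          pad + dx : pad + dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))  # smooth image
protected = clean + 0.03 * rng.standard_normal((32, 32))  # adversarial 'protection'
purified = box_blur(protected)
print("error before:", np.abs(protected - clean).mean())
print("error after :", np.abs(purified - clean).mean())   # noticeably smaller
```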

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

SRAM-Based Compute-in-Memory Accelerator for Linear-decay Spiking Neural Networks

Researchers developed an SRAM-based compute-in-memory accelerator for spiking neural networks that uses a linear decay approximation instead of exponential decay, achieving a 1.1x to 16.7x reduction in energy consumption. The innovation addresses the bottleneck of neuron state updates in neuromorphic computing by performing the decay in place, directly within the memory arrays.
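
The hardware win comes from the arithmetic: linear decay turns the per-step membrane update from a multiply (v *= alpha) into a subtract (v -= d), which is cheap to perform in place inside a memory array. A toy LIF neuron comparing the two (constants are illustrative, not the paper's):

```python
# Leaky integrate-and-fire neuron with exponential vs. linear membrane decay.
import numpy as np

def lif_run(inputs, threshold=1.0, alpha=0.9, decay_step=0.1, linear=False):
    v, spikes = 0.0, []
    for x in inputs:
        v = max(v - decay_step, 0.0) if linear else v * alpha  # decay
        v += x                                                 # integrate
        if v >= threshold:                                     # fire and reset
            spikes.append(1); v = 0.0
        else:
            spikes.append(0)
    return spikes

inp = np.random.default_rng(1).random(20) * 0.5
print("exp:", lif_run(inp))
print("lin:", lif_run(inp, linear=True))  # near-identical spike train, cheaper update
```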

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Efficient Reasoning with Balanced Thinking

Researchers propose ReBalance, a training-free framework that optimizes Large Reasoning Models by addressing overthinking and underthinking issues through confidence-based guidance. The solution dynamically adjusts reasoning trajectories without requiring model retraining, showing improved accuracy across multiple AI benchmarks.
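
One way to picture confidence-based guidance (the stopping rule and thresholds below are assumptions, not ReBalance's published mechanism): keep generating reasoning steps while confidence is low, and cut the trace short once it is high, with a floor on step count to prevent premature answers:

```python
# Hypothetical confidence-guided control of reasoning-trace length.
def generate_step(trace: str) -> tuple:
    """Stand-in for one reasoning step plus the model's confidence in its
    current answer (e.g. mean token probability of an answer probe)."""
    conf = min(1.0, 0.3 + 0.1 * trace.count("step"))
    return trace + f" step{trace.count('step')}", conf

def reason(question: str, stop_conf=0.8, min_steps=2, max_steps=12) -> str:
    trace, steps = question, 0
    while steps < max_steps:
        trace, conf = generate_step(trace)
        steps += 1
        # stop early once confident (curbs overthinking), but enforce a floor
        # of steps so the model cannot answer prematurely (underthinking)
        if conf >= stop_conf and steps >= min_steps:
            break
    return trace

print(reason("Q:"))
```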

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

AI Model Modulation with Logits Redistribution

Researchers propose AIM, a novel AI model modulation paradigm that allows a single model to exhibit diverse behaviors without maintaining multiple specialized versions. The approach uses logits redistribution to enable dynamic control over output quality and input feature focus without requiring retraining or additional training data.

🧠 Llama
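
The core operation is a decode-time transform of the output logits. A toy sketch of one plausible redistribution rule, temperature scaling plus a per-token boost (AIM's actual formulation is not given in the summary, so this is an assumption):

```python
# Hypothetical logits redistribution: modulate behavior at decode time,
# with no retraining and no extra model copies.
import numpy as np

def redistribute(logits: np.ndarray, boost_ids: list,
                 temperature: float = 1.0, boost: float = 2.0) -> np.ndarray:
    """Sharpen/flatten via temperature and shift mass toward chosen tokens."""
    z = logits / temperature        # returns a new array; input untouched
    z[boost_ids] += boost           # steer output focus toward selected tokens
    e = np.exp(z - z.max())
    return e / e.sum()              # softmax over the modulated logits

logits = np.array([2.0, 1.0, 0.5, 0.1])
print(redistribute(logits, boost_ids=[2], temperature=0.7))
```
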
🧠 AI · Bearish · arXiv – CS AI · Mar 16 · 7/10

Diagnosing Retrieval Bias Under Multiple In-Context Knowledge Updates in Large Language Models

Researchers identify a significant bias in Large Language Models when processing multiple updates to the same factual information within context. The study reveals that LLMs struggle to accurately retrieve the most recent version of updated facts, with performance degrading as the number of updates increases, similar to memory interference patterns observed in cognitive psychology.
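
A probe for this bias is straightforward to construct: stack several in-context updates of the same fact and check whether the model returns the most recent value. A sketch with a stub model that deliberately exhibits the reported stale-value failure (the prompt format and stub are assumptions):

```python
# Hypothetical harness for measuring update-retrieval bias.
def build_prompt(entity: str, values: list) -> str:
    updates = "\n".join(f"Update {i+1}: {entity} is {v}."
                        for i, v in enumerate(values))
    return f"{updates}\nQuestion: What is {entity} now? Answer:"

def query_model(prompt: str) -> str:
    # stub: simulates a biased model that latches onto the first-seen value;
    # replace with a real LLM call
    return prompt.split(" is ")[1].split(".")[0]

def latest_update_retrieved(entity: str, values: list) -> bool:
    answer = query_model(build_prompt(entity, values))
    return values[-1] in answer   # correct iff the most recent update wins

print(latest_update_retrieved("the passcode", ["1111", "2222", "3333"]))  # False
```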

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Aligning Language Models from User Interactions

Researchers developed a new method for training AI language models using multi-turn user conversations through self-distillation, leveraging follow-up messages to improve model alignment. Testing on real-world WildChat conversations showed improvements in alignment and instruction-following benchmarks while enabling personalization without explicit feedback.
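
One plausible ingredient of such a pipeline (a guess at the mechanics, not the paper's method) is mining multi-turn logs for implicit feedback: when a follow-up message corrects the assistant, the original prompt can be paired with the post-correction response as the preferred target:

```python
# Hypothetical mining of (prompt, rejected, preferred) triples from chat logs.
CORRECTION_CUES = ("no,", "that's wrong", "actually", "not what i")

def mine_pairs(dialog: list) -> list:
    """dialog: [{'role': 'user'|'assistant', 'text': ...}, ...]
    Returns (prompt, rejected_response, preferred_response) triples."""
    pairs = []
    for i in range(len(dialog) - 3):
        u0, a0, u1, a1 = dialog[i:i + 4]
        if (u0["role"] == "user" and a0["role"] == "assistant"
                and u1["role"] == "user" and a1["role"] == "assistant"
                and any(c in u1["text"].lower() for c in CORRECTION_CUES)):
            pairs.append((u0["text"], a0["text"], a1["text"]))
    return pairs

log = [{"role": "user", "text": "Summarize the paper."},
       {"role": "assistant", "text": "It is about cats."},
       {"role": "user", "text": "No, it's about model alignment."},
       {"role": "assistant", "text": "It studies aligning LLMs from user chats."}]
print(mine_pairs(log))
```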

🧠 AI · Neutral · arXiv – CS AI · Mar 16 · 7/10

Semantic Invariance in Agentic AI

Researchers developed a testing framework to evaluate how reliably AI agents maintain consistent reasoning when inputs are semantically equivalent but differently phrased. Their study of seven foundation models across 19 reasoning problems found that larger models aren't necessarily more robust, with the smaller Qwen3-30B-A3B achieving the highest stability at 79.6% invariant responses.
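
The metric itself is simple to reproduce: pose semantically equivalent paraphrases and measure how often the answer stays the same. A sketch (the majority-agreement definition and the model stub are assumptions):

```python
# Hypothetical semantic-invariance check over paraphrased prompts.
from collections import Counter

def query_model(prompt: str) -> str:
    return "4"  # stub; replace with a real LLM call

def invariance_rate(paraphrases: list) -> float:
    answers = [query_model(p) for p in paraphrases]
    majority = Counter(answers).most_common(1)[0][1]
    return majority / len(answers)  # fraction agreeing with the modal answer

prompts = ["What is 2+2?", "Compute the sum of 2 and 2.", "Add two and two."]
print(invariance_rate(prompts))
```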

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

From Garbage to Gold: A Data-Architectural Theory of Predictive Robustness

Researchers propose a new theoretical framework explaining why modern machine learning models achieve robust performance using high-dimensional, error-prone data, challenging the traditional 'Garbage In, Garbage Out' principle. The study introduces concepts like 'Informative Collinearity' and 'Proactive Data-Centric AI' to show how data architecture and model capacity work together to overcome noise and structural uncertainty.
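
A worked toy example of the intuition, as I read the summary: many noisy, collinear copies of one latent signal let a plain linear model average the noise away, so adding 'garbage' features improves prediction rather than hurting it:

```python
# Toy demonstration that redundant noisy features can help ('informative
# collinearity', loosely). Synthetic data; an illustration, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 50
latent = rng.standard_normal(n)
X = latent[:, None] + rng.standard_normal((n, p))   # p noisy copies of the latent
y = 2.0 * latent + 0.1 * rng.standard_normal(n)

w_many, *_ = np.linalg.lstsq(X, y, rcond=None)      # least squares on all copies
w_one, *_ = np.linalg.lstsq(X[:, :1], y, rcond=None)
r2 = lambda pred: 1 - np.var(y - pred) / np.var(y)

print("R^2 with  1 noisy feature :", r2(X[:, :1] @ w_one))   # ~0.5
print("R^2 with 50 noisy features:", r2(X @ w_many))         # close to 1
```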

🧠 AI · Bearish · arXiv – CS AI · Mar 16 · 7/10

Altered Thoughts, Altered Actions: Probing Chain-of-Thought Vulnerabilities in VLA Robotic Manipulation

Research reveals critical vulnerabilities in Vision-Language-Action robotic models that use chain-of-thought reasoning, where corrupting object names in internal reasoning traces can reduce task success rates by up to 45%. The study shows these AI systems are vulnerable to attacks on their internal reasoning processes, even when primary inputs remain untouched.
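
The perturbation being described is disarmingly simple, something like a string substitution inside the intermediate reasoning trace while the camera input and task prompt stay untouched (the trace format below is a made-up example):

```python
# Toy illustration of corrupting object names in a chain-of-thought trace.
def corrupt_trace(trace: str, swap: dict) -> str:
    for a, b in swap.items():
        trace = trace.replace(a, b)  # attack only the internal reasoning text
    return trace

trace = "Plan: locate the red cup, grasp the red cup, place it on the tray."
print(corrupt_trace(trace, {"red cup": "blue bowl"}))
```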

🧠 AI · Neutral · arXiv – CS AI · Mar 16 · 7/10

HCP-DCNet: A Hierarchical Causal Primitive Dynamic Composition Network for Self-Improving Causal Understanding

Researchers introduce HCP-DCNet, a new AI framework that combines physical dynamics with symbolic causal reasoning to enable AI systems to understand cause-and-effect relationships. The system uses hierarchical causal primitives and can self-improve through interventions, potentially addressing current limitations in AI's ability to handle distribution shifts and counterfactual reasoning.

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Revisiting Model Stitching In the Foundation Model Era

Researchers introduce improved methods for stitching Vision Foundation Models (VFMs) such as CLIP and DINOv2, enabling the strengths of different models to be combined. The study proposes the VFM Stitch Tree (VST) technique, which allows controllable accuracy-latency trade-offs for multimodal applications.
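
The basic stitching primitive is a small adapter mapping one backbone's features into another's space; a least-squares linear stitch is the classic baseline (the feature shapes and synthetic data below are assumptions, and this is not the VST method itself):

```python
# Toy linear stitching layer: map model A's features into model B's space,
# so A's front half can feed B's back half.
import numpy as np

rng = np.random.default_rng(0)
feats_a = rng.standard_normal((1000, 768))       # e.g. CLIP-like features
true_map = 0.05 * rng.standard_normal((768, 1024))
feats_b = feats_a @ true_map                     # e.g. DINOv2-like features

# least-squares stitch: W = argmin ||A W - B||_F
W, *_ = np.linalg.lstsq(feats_a, feats_b, rcond=None)
err = np.linalg.norm(feats_a @ W - feats_b) / np.linalg.norm(feats_b)
print(f"relative stitching error: {err:.2e}")
```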

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

A Geometrically-Grounded Drive for MDL-Based Optimization in Deep Learning

Researchers introduce a novel optimization framework that integrates the Minimum Description Length (MDL) principle directly into deep neural network training dynamics. The method uses geometrically-grounded cognitive manifolds with coupled Ricci flow to create autonomous model simplification while maintaining data fidelity, with theoretical guarantees for convergence and practical O(N log N) complexity.
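
Setting aside the Ricci-flow machinery, the MDL objective itself follows the classic two-part code, L(model) + L(data | model): a sparser model pays fewer 'model bits' at the cost of a slightly worse fit. A toy version of that trade-off (the coding scheme and constants are assumptions, far simpler than the paper's geometric drive):

```python
# Classic two-part MDL objective as a toy loss; not the paper's formulation.
import numpy as np

def mdl_loss(residuals, weights, sigma=1.0, bits_per_weight=2.0):
    """L(data | model) + L(model), both in bits (up to constants)."""
    data_bits = 0.5 * np.sum((residuals / sigma) ** 2) / np.log(2)  # Gaussian code
    model_bits = bits_per_weight * np.count_nonzero(np.abs(weights) > 1e-3)
    return data_bits + model_bits

rng = np.random.default_rng(0)
dense = 0.5 * rng.standard_normal(1000)
sparse = np.where(np.abs(dense) > 0.8, dense, 0.0)        # pruned model
res_dense = 0.10 * rng.standard_normal(100)               # slightly better fit
res_sparse = 0.12 * rng.standard_normal(100)              # slightly worse fit
print("dense :", mdl_loss(res_dense, dense))              # pays ~2000 model bits
print("sparse:", mdl_loss(res_sparse, sparse))            # far lower total
```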

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Cost-Efficient Multimodal LLM Inference via Cross-Tier GPU Heterogeneity

Researchers developed HeteroServe, a system that optimizes multimodal large language model inference by partitioning vision encoding and language generation across different GPU tiers. The approach reduces data transfer requirements and achieves 31-40% cost savings while improving throughput by up to 54% compared to existing systems.
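
The placement decision reduces to a cost model: run the lighter vision encoder on cheap GPUs and reserve high-end GPUs for language generation. A back-of-envelope sketch with entirely made-up prices and latencies:

```python
# Toy cost model for cross-tier placement of a multimodal pipeline.
def cost_per_req(stages: dict) -> float:
    """stages: name -> (seconds per request, $ per GPU-second)."""
    return sum(sec * price for sec, price in stages.values())

homogeneous = {"vision": (0.05, 0.004),    # everything on high-end GPUs
               "language": (0.20, 0.004)}
heterogeneous = {"vision": (0.08, 0.001),  # vision moved to a cheap tier
                 "language": (0.20, 0.004)}

h, m = cost_per_req(homogeneous), cost_per_req(heterogeneous)
print(f"homogeneous ${h:.5f}, heterogeneous ${m:.5f}, saving {1 - m/h:.0%}")
```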

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Reinforcement Learning for Diffusion LLMs with Entropy-Guided Step Selection and Stepwise Advantages

Researchers developed a new reinforcement learning approach for training diffusion language models that uses entropy-guided step selection and stepwise advantages to overcome challenges with sequence-level likelihood calculations. The method achieves state-of-the-art results on coding and logical reasoning benchmarks while being more computationally efficient than existing approaches.
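
Entropy-guided selection can be pictured as scoring each denoising step by the entropy of its token distribution and concentrating credit assignment on the most uncertain steps. A sketch of that scoring (my reading of the summary, not the paper's exact rule):

```python
# Hypothetical entropy-based step selection for stepwise credit assignment.
import numpy as np

def entropy(probs: np.ndarray) -> float:
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

def select_steps(step_dists: list, top_k: int) -> list:
    scores = [entropy(p) for p in step_dists]          # uncertainty per step
    return sorted(np.argsort(scores)[-top_k:].tolist())

rng = np.random.default_rng(0)
dists = [rng.dirichlet(np.ones(50) * a) for a in (0.1, 1.0, 10.0, 0.5)]
print(select_steps(dists, top_k=2))  # indices of the most uncertain steps
```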

🧠 AI · Neutral · arXiv – CS AI · Mar 16 · 7/10

Embedded Quantum Machine Learning in Embedded Systems: Feasibility, Hybrid Architectures, and Quantum Co-Processors

A research paper explores the feasibility of embedded quantum machine learning (EQML) for edge devices such as IoT nodes and drones by 2026. The study identifies hybrid workflows and embedded quantum co-processors as the most viable implementation pathways, while highlighting major barriers including latency, data-encoding overhead, and energy constraints.

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

When Drafts Evolve: Speculative Decoding Meets Online Learning

Researchers introduce OnlineSpec, a framework that uses online learning to continuously improve draft models in speculative decoding for large language model inference acceleration. The approach leverages verification feedback to evolve draft models dynamically, achieving up to 24% speedup improvements across seven benchmarks and three foundation models.
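
A small simulation shows why verification feedback helps: each emitted token in speculative sampling is distributed according to the target model, so nudging the draft toward emitted tokens pulls it toward the target and raises the acceptance rate. The update rule below (a simple EMA) is an illustrative stand-in for OnlineSpec's actual learner:

```python
# Speculative sampling with an online-updated categorical draft (toy scale).
import numpy as np

rng = np.random.default_rng(0)
V = 8
target = rng.dirichlet(np.ones(V))   # 'big' model's next-token distribution
draft = np.ones(V) / V               # draft starts uniform

def accept_prob(tok):                # standard speculative acceptance rule
    return min(1.0, target[tok] / draft[tok])

lr, rates = 0.02, []
for _ in range(5000):
    tok = rng.choice(V, p=draft)
    if rng.random() < accept_prob(tok):
        final, ok = tok, 1
    else:                            # target resamples from the residual dist
        resid = np.clip(target - draft, 0.0, None)
        final, ok = rng.choice(V, p=resid / resid.sum()), 0
    rates.append(ok)
    draft = (1 - lr) * draft + lr * np.eye(V)[final]  # EMA toward emitted token

print("early acceptance:", np.mean(rates[:500]))
print("late  acceptance:", np.mean(rates[-500:]))     # rises as draft adapts
```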

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

LightMoE: Reducing Mixture-of-Experts Redundancy through Expert Replacing

Researchers introduce LightMoE, a new framework that compresses Mixture-of-Experts language models by replacing redundant expert modules with parameter-efficient alternatives. The method achieves 30-50% compression rates while maintaining or improving performance, addressing the substantial memory demands that limit MoE model deployment.
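
Redundancy can be screened for directly: run shared probe tokens through each expert and flag pairs whose outputs are nearly parallel as candidates for replacement. A cosine-similarity sketch (an illustrative criterion, not necessarily LightMoE's):

```python
# Toy redundancy screen for Mixture-of-Experts modules.
import numpy as np

def expert_similarity(outputs: dict) -> list:
    """Pairwise cosine similarity of per-expert outputs on shared probe tokens."""
    pairs, ids = [], sorted(outputs)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            x, y = outputs[a].ravel(), outputs[b].ravel()
            cos = float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
            pairs.append((a, b, cos))
    return sorted(pairs, key=lambda t: -t[2])

rng = np.random.default_rng(0)
base = rng.standard_normal((64, 128))
outs = {0: base,
        1: base + 0.01 * rng.standard_normal((64, 128)),  # near-duplicate expert
        2: rng.standard_normal((64, 128))}
print(expert_similarity(outs)[0])  # pair (0, 1) flagged for replacement
```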

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Active Causal Structure Learning with Latent Variables: Towards Learning to Detour in Autonomous Robots

Researchers propose Active Causal Structure Learning with Latent Variables (ACSLWL) as a necessary component for building AGI agents and robots. The paper demonstrates how this approach enables simulated robots to learn complex detour behaviors when encountering unexpected obstacles, allowing them to adapt to new environments by constructing internal causal models.
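
The active-intervention core is the do-operator test: intervene on a variable and check which others shift. A toy structural model makes the idea concrete (the world below is a made-up stand-in for the paper's robot simulator):

```python
# Toy intervention-based causal edge test.
import numpy as np

rng = np.random.default_rng(0)

def world(do_a=None):
    """Toy structural model: A -> B, while C is independent of A."""
    a = rng.standard_normal() if do_a is None else do_a
    return {"A": a,
            "B": 2 * a + 0.1 * rng.standard_normal(),
            "C": rng.standard_normal()}

def shifts_under_do_a(var, n=2000):
    base = np.mean([world()[var] for _ in range(n)])
    dosed = np.mean([world(do_a=3.0)[var] for _ in range(n)])
    return abs(dosed - base) > 0.5     # crude effect threshold

print("A causes B:", shifts_under_do_a("B"))  # True
print("A causes C:", shifts_under_do_a("C"))  # False
```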

🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

ARL-Tangram: Unleash the Resource Efficiency in Agentic Reinforcement Learning

Researchers introduced ARL-Tangram, a resource management system that optimizes cloud resource allocation for agentic reinforcement learning tasks involving large language models. The system achieves up to 4.3x faster action completion times and 71.2% resource savings through action-level orchestration, and has been deployed for training MiMo series models.
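
The resource argument is simple arithmetic: an agent spends much of an episode waiting on environment and tool calls, so holding GPUs for whole episodes wastes them, while action-level orchestration pays only for generation time. A back-of-envelope sketch with made-up timings:

```python
# Toy comparison of episode-level vs. action-level GPU allocation.
episode = [("generate", 2.0), ("tool_call", 6.0)] * 5   # (phase, seconds)

episode_level = sum(t for _, t in episode)              # GPU held throughout
action_level = sum(t for phase, t in episode if phase == "generate")
print(f"GPU-seconds: episode-level {episode_level}, action-level {action_level}, "
      f"saving {1 - action_level / episode_level:.0%}")
```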

🧠 AI · Bearish · arXiv – CS AI · Mar 16 · 7/10

Experimental evidence of progressive ChatGPT models self-convergence

Research reveals that recent ChatGPT models show declining ability to generate diverse text outputs, a phenomenon called 'model self-convergence.' This degradation is attributed to training on increasing amounts of synthetic data as AI-generated content proliferates across the internet.

🧠 ChatGPT
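
Diversity collapse of this kind is typically measured with distinct-n style metrics, the fraction of unique n-grams across repeated samples; falling scores across model versions would be the self-convergence signature. A sketch with made-up samples:

```python
# Distinct-n diversity metric over repeated generations (toy data).
def distinct_n(samples: list, n: int = 2) -> float:
    grams = []
    for s in samples:
        toks = s.split()
        grams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

older = ["the cat sat on a mat", "a dog ran in the park", "birds fly over hills"]
newer = ["the cat sat on the mat", "the cat sat on a mat", "the cat sat there"]
print(distinct_n(older), distinct_n(newer))  # newer model repeats itself more
```
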
🧠 AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Learnability and Privacy Vulnerability are Entangled in a Few Critical Weights

Researchers discovered that privacy vulnerabilities in neural networks exist in only a small fraction of weights, but these same weights are critical for model performance. They developed a new approach that preserves privacy by rewinding and fine-tuning only these critical weights instead of retraining entire networks, maintaining utility while defending against membership inference attacks.
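
The recipe, as summarized, fits in a few lines: score weights by privacy sensitivity, rewind only the top fraction to an earlier checkpoint, then briefly fine-tune. A sketch of the rewind step (the random sensitivity scores and the 1% fraction are stand-ins for the paper's actual criterion):

```python
# Toy 'rewind the critical weights' step for privacy repair.
import numpy as np

def rewind_critical(final_w, early_w, sensitivity, frac=0.01):
    """Reset the most privacy-sensitive `frac` of weights to early values."""
    k = max(1, int(frac * final_w.size))
    idx = np.argpartition(sensitivity.ravel(), -k)[-k:]   # top-k sensitive
    out = final_w.copy().ravel()
    out[idx] = early_w.ravel()[idx]
    return out.reshape(final_w.shape)

rng = np.random.default_rng(0)
w_final = rng.standard_normal((100, 100))
w_early = rng.standard_normal((100, 100))       # early-checkpoint weights
sens = np.abs(rng.standard_normal((100, 100)))  # stand-in sensitivity scores
w_priv = rewind_critical(w_final, w_early, sens, frac=0.01)
print("weights changed:", int((w_priv != w_final).sum()))  # ~100 of 10,000
```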
