y0news

#reinforcement-learning News & Analysis

500 articles tagged with #reinforcement-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

QED-Nano: Teaching a Tiny Model to Prove Hard Theorems

Researchers developed QED-Nano, a 4B parameter AI model that achieves competitive performance on Olympiad-level mathematical proofs despite being much smaller than proprietary systems. The model uses a three-stage training approach including supervised fine-tuning, reinforcement learning, and reasoning cache expansion to match larger models at a fraction of the inference cost.

🧠 Gemini
AI · Bearish · arXiv – CS AI · Apr 7 · 7/10

Comparative reversal learning reveals rigid adaptation in LLMs under non-stationary uncertainty

Research reveals that large language models like DeepSeek-V3.2, Gemini-3, and GPT-5.2 show rigid adaptation patterns when learning from changing environments, particularly struggling with loss-based learning compared to humans. The study found LLMs demonstrate asymmetric responses to positive versus negative feedback, with some models showing extreme perseveration after environmental changes.
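
The reversal-learning paradigm behind this study can be sketched as a two-armed bandit whose better arm flips mid-session; the exact protocol below (trial counts, probabilities, and the win-stay/lose-shift learner) is illustrative, not the paper's setup.

```python
import random

def reversal_learning_session(n_trials=200, reversal_at=100, p_good=0.8, seed=0):
    """Two-armed bandit whose rewarding arm swaps at `reversal_at`.

    A simple win-stay/lose-shift learner plays the session; after the
    reversal, an agent that perseverates on the old arm earns less.
    """
    rng = random.Random(seed)
    good_arm = 0
    choice = 0
    rewards = []
    for t in range(n_trials):
        if t == reversal_at:          # non-stationarity: arms swap
            good_arm = 1 - good_arm
        p = p_good if choice == good_arm else 1 - p_good
        r = 1 if rng.random() < p else 0
        rewards.append(r)
        if r == 0:                    # lose-shift: switch arm after no reward
            choice = 1 - choice
    return rewards

rewards = reversal_learning_session()
```

Comparing an agent's choice trajectory before and after the reversal point is what distinguishes flexible adaptation from the rigid perseveration the study reports in LLMs.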

🧠 GPT-5 · 🧠 Gemini
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Learning Dexterous Grasping from Sparse Taxonomy Guidance

Researchers developed GRIT, a two-stage AI framework that learns dexterous robotic grasping from sparse taxonomy guidance, achieving an 87.9% success rate. The system first predicts grasp specifications from scene context, then generates finger motions while preserving the intended grasp structure, improving generalization to novel objects.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Can LLMs Learn to Reason Robustly under Noisy Supervision?

Researchers propose Online Label Refinement (OLR) to improve AI reasoning models' robustness under noisy supervision in Reinforcement Learning with Verifiable Rewards. The method addresses the critical problem of training language models when expert-labeled data contains errors, achieving 3-4% performance gains across mathematical reasoning benchmarks.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Sim2Real-AD: A Modular Sim-to-Real Framework for Deploying VLM-Guided Reinforcement Learning in Real-World Autonomous Driving

Researchers developed Sim2Real-AD, a framework that successfully transfers VLM-guided reinforcement learning policies trained in CARLA simulation to real autonomous vehicles without requiring real-world training data. The system achieved 75-90% success rates in real-world driving scenarios when deployed on a full-scale Ford E-Transit.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

JoyAI-LLM Flash: Advancing Mid-Scale LLMs with Token Efficiency

JoyAI-LLM Flash is a new efficient Mixture-of-Experts language model with 48B parameters that activates only 2.7B per forward pass, trained on 20 trillion tokens. The model introduces FiberPO, a novel reinforcement learning algorithm, and achieves higher sparsity ratios than comparable industry models while being released open-source on Hugging Face.

🏢 Hugging Face
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

GrandCode: Achieving Grandmaster Level in Competitive Programming via Agentic Reinforcement Learning

GrandCode, a new multi-agent reinforcement learning system, has become the first AI to consistently defeat all human competitors in live competitive programming contests, placing first in three recent Codeforces competitions. This result suggests AI has now surpassed even the strongest human programmers in the most challenging coding tasks.

🧠 Gemini
AI · Bearish · arXiv – CS AI · Apr 6 · 7/10

Generalization Limits of Reinforcement Learning Alignment

Researchers discovered that reinforcement learning alignment techniques like RLHF have significant generalization limits, demonstrated through 'compound jailbreaks' that increased attack success rates from 14.3% to 71.4% on OpenAI's gpt-oss-20b model. The study provides empirical evidence that safety training doesn't generalize as broadly as model capabilities, highlighting critical vulnerabilities in current AI alignment approaches.

🏢 OpenAI
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Training Multi-Image Vision Agents via End2End Reinforcement Learning

Researchers introduce IMAgent, an open-source visual AI agent trained with reinforcement learning to handle multi-image reasoning tasks. The system addresses limitations of current VLM-based agents that only process single images, using specialized tools for visual reflection and verification to maintain attention on image content throughout inference.

🏢 OpenAI · 🧠 o1 · 🧠 o3
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Mitigating Reward Hacking in RLHF via Advantage Sign Robustness

Researchers propose Sign-Certified Policy Optimization (SignCert-PO) to address reward hacking in reinforcement learning from human feedback (RLHF), a critical problem where AI models exploit learned reward systems rather than improving actual performance. The lightweight approach down-weights non-robust responses during policy optimization and showed improved win rates on summarization and instruction-following benchmarks.
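
One reading of the down-weighting idea: keep full weight only for responses whose advantage estimate is large enough in magnitude for its sign to be trusted. This sketch is an assumption about the mechanism, not SignCert-PO's actual criterion, and `margin` is a hypothetical parameter.

```python
def signcert_weights(advantages, margin=0.1):
    """Down-weight responses whose advantage estimate sits too close to
    zero for its sign to be robust (illustrative rule, not the paper's).

    Responses with |advantage| >= margin keep weight 1.0; smaller
    magnitudes are scaled down proportionally, so noisy near-zero
    advantages contribute less to the policy update.
    """
    return [1.0 if abs(a) >= margin else abs(a) / margin for a in advantages]

w = signcert_weights([0.5, -0.3, 0.05, -0.05])
```

Shrinking the contribution of sign-uncertain samples is one generic way to blunt reward hacking, since exploited reward-model quirks often show up as small, unstable advantage estimates.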

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Train at Moving Edge: Online-Verified Prompt Selection for Efficient RL Training of Large Reasoning Model

Researchers propose HIVE, a new framework for training large language models more efficiently in reinforcement learning by selecting high-utility prompts before rollout. The method uses historical reward data and prompt entropy to identify the 'learning edge' where models learn most effectively, significantly reducing computational overhead without performance loss.
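
The "learning edge" intuition can be sketched with Bernoulli entropy: a prompt whose historical pass rate sits near 0.5 is the most informative to roll out. The scoring and selection rule below is a minimal illustration; HIVE's actual utility estimate combines more signals than this.

```python
import math

def prompt_utility(pass_rate):
    """Bernoulli entropy of a prompt's historical pass rate: maximal at
    0.5 (the model sometimes solves it, sometimes not), near zero for
    prompts that are always solved or always failed."""
    p = min(max(pass_rate, 1e-9), 1 - 1e-9)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_prompts(history, k=2):
    """Pick the k prompts with the highest utility, i.e. closest to the
    learning edge (illustrative selection rule)."""
    return sorted(history, key=lambda pid: -prompt_utility(history[pid]))[:k]

history = {"p1": 0.95, "p2": 0.5, "p3": 0.1, "p4": 0.6}
chosen = select_prompts(history)  # prompts nearest a 50% pass rate win
```

Skipping rollouts on always-solved or never-solved prompts is where the computational savings come from: those rollouts yield near-zero advantage and thus little gradient signal.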

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

From Pixels to Digital Agents: An Empirical Study on the Taxonomy and Technological Trends of Reinforcement Learning Environments

Researchers conducted a large-scale empirical study analyzing over 2,000 publications to map the evolution of reinforcement learning environments. The study reveals a paradigm shift toward two distinct ecosystems: LLM-driven 'Semantic Prior' agents and 'Domain-Specific Generalization' systems, providing a roadmap for next-generation AI simulators.

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

Reward Is Enough: LLMs Are In-Context Reinforcement Learners

Researchers demonstrate that large language models can perform reinforcement learning during inference through a new 'in-context RL' prompting framework. The method shows LLMs can optimize scalar reward signals to improve response quality across multiple rounds, achieving significant improvements on complex tasks like mathematical competitions and creative writing.
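
The multi-round loop can be sketched generically: each round, the model sees its prior attempts annotated with their scalar rewards and is asked to improve. The prompt layout and the toy stand-ins for the LLM and reward function below are assumptions, not the paper's template.

```python
def in_context_rl(generate, reward_fn, task, rounds=3):
    """Generic in-context RL loop: feed (attempt, reward) history back
    into the prompt each round and keep the best-scoring attempt.
    `generate` stands in for an LLM call."""
    history = []
    best = None
    for _ in range(rounds):
        prompt = task + "".join(
            f"\nAttempt: {a}\nReward: {r}" for a, r in history
        ) + "\nProduce a higher-reward attempt:"
        attempt = generate(prompt)
        r = reward_fn(attempt)
        history.append((attempt, r))
        if best is None or r > best[1]:
            best = (attempt, r)
    return best

# toy stand-ins: the "model" proposes one more character per round,
# and reward is simply string length
best = in_context_rl(
    generate=lambda p: "x" * (p.count("Attempt:") + 1),
    reward_fn=len,
    task="Write the longest string.",
)
```

No weights are updated anywhere in the loop; the optimization happens entirely in context, which is the paper's central claim.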

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

HDPO: Hybrid Distillation Policy Optimization via Privileged Self-Distillation

Researchers introduce Hybrid Distillation Policy Optimization (HDPO), a new method that improves large language model training for mathematical reasoning by addressing 'cliff prompts' where standard reinforcement learning fails. The technique uses privileged self-distillation to provide learning signals for previously unsolvable problems, showing measurable improvements in coverage metrics while maintaining accuracy.

AI · Bullish · IEEE Spectrum – AI · Mar 25 · 7/10

Training Driving AI at 50,000× Real Time

General Motors is developing scalable AI systems that can train autonomous driving at 50,000x real-time speed through high-fidelity simulations. The company combines Vision Language Action models, reinforcement learning, and millions of daily simulations to handle rare 'long-tail' driving scenarios that current systems struggle with.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

SPARQ: Spiking Early-Exit Neural Networks for Energy-Efficient Edge AI

SPARQ introduces a unified framework combining spiking neural networks, quantization-aware training, and reinforcement learning-guided early exits for energy-efficient edge AI. The system achieves up to 5.15% higher accuracy than conventional quantized SNNs while reducing system energy consumption by a factor of more than 330 and cutting synaptic operations by over 90%.
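
The early-exit mechanic itself can be sketched as follows: stop at the first intermediate classifier that is confident enough, falling back to the final layer. A fixed confidence threshold stands in here for the RL-learned exit policy that is SPARQ's actual contribution.

```python
import math

def early_exit(layer_logits, threshold=0.9):
    """Return (predicted_class, exit_layer): take the first intermediate
    classifier whose softmax confidence clears the threshold, else use
    the final layer. Exiting early skips the remaining layers' compute."""
    for i, logits in enumerate(layer_logits):
        m = max(logits)
        exps = [math.exp(z - m) for z in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        conf = max(probs)
        if conf >= threshold or i == len(layer_logits) - 1:
            return probs.index(conf), i

# ambiguous at the first classifier, confident at the second:
pred, layer = early_exit([[0.1, 0.2], [5.0, 0.0]])
```

Energy savings scale with how often easy inputs exit at shallow layers, which is why learning *where* to exit (rather than thresholding) is worth an RL formulation.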

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

APEX-Searcher: Augmenting LLMs' Search Capabilities through Agentic Planning and Execution

Researchers introduce APEX-Searcher, a new framework that enhances large language models' search capabilities through a two-stage approach combining reinforcement learning for strategic planning and supervised fine-tuning for execution. The system addresses limitations in multi-hop question answering by decoupling retrieval processes into planning and execution phases, showing significant improvements across multiple benchmarks.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Amplification Effects in Test-Time Reinforcement Learning: Safety and Reasoning Vulnerabilities

Researchers discovered that test-time reinforcement learning (TTRL) methods used to improve AI reasoning capabilities are vulnerable to harmful prompt injections that amplify both safety-related and harmfulness-related behaviors. The study shows these methods can be exploited through specially designed 'HarmInject' prompts, leading to reasoning degradation and highlighting the need for safer AI training approaches.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

AutoTool: Automatic Scaling of Tool-Use Capabilities in RL via Decoupled Entropy Constraints

Researchers introduce AutoTool, a new reinforcement learning approach that enables AI agents to automatically scale their reasoning capabilities for tool use. The method uses entropy-based optimization and supervised fine-tuning to help models efficiently determine appropriate thinking lengths for simple versus complex problems, achieving 9.8% accuracy improvements while reducing computational overhead by 81%.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

OpenClaw-RL: Train Any Agent Simply by Talking

OpenClaw-RL is a new reinforcement learning framework that enables AI agents to learn continuously from any type of interaction, including conversations, terminal commands, and GUI interactions. The system extracts learning signals from user responses and feedback, allowing agents to improve simply by being used in real-world scenarios.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

From Passive Observer to Active Critic: Reinforcement Learning Elicits Process Reasoning for Robotic Manipulation

Researchers introduce PRIMO R1, a 7B parameter AI framework that transforms video MLLMs from passive observers into active critics for robotic manipulation tasks. The system uses reinforcement learning to achieve 50% better accuracy than specialized baselines and outperforms 72B-scale models, establishing state-of-the-art performance on the RoboFail benchmark.

🏢 OpenAI · 🧠 o1
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Masked Auto-Regressive Variational Acceleration: Fast Inference Makes Practical Reinforcement Learning

Researchers introduce MARVAL, a distillation framework that accelerates masked auto-regressive diffusion models by compressing inference into a single step while enabling practical reinforcement learning applications. The method achieves 30x speedup on ImageNet with comparable quality, making RL post-training feasible for the first time with these models.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Fine-tuning is Not Enough: A Parallel Framework for Collaborative Imitation and Reinforcement Learning in End-to-end Autonomous Driving

Researchers propose PaIR-Drive, a new parallel framework that combines imitation learning and reinforcement learning for autonomous driving, achieving a PDMS of 91.2 on the NAVSIMv1 benchmark. The approach addresses the limitations of sequential fine-tuning by running IL and RL in parallel branches, enabling better performance than existing methods.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Towards On-Policy SFT: Distribution Discriminant Theory and its Applications in LLM Training

Researchers propose a new framework called On-Policy SFT that bridges the performance gap between supervised fine-tuning and reinforcement learning in AI model training. The framework introduces Distribution Discriminant Theory (DDT) and two techniques, In-Distribution Finetuning and Hinted Decoding, that achieve better generalization while maintaining computational efficiency.

Page 2 of 20