y0news

#reinforcement-learning News & Analysis

511 articles tagged with #reinforcement-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

SPARQ: Spiking Early-Exit Neural Networks for Energy-Efficient Edge AI

SPARQ introduces a unified framework combining spiking neural networks, quantization-aware training, and reinforcement learning-guided early exits for energy-efficient edge AI. The system achieves up to 5.15% higher accuracy than conventional quantized SNNs while cutting system energy consumption by a factor of more than 330 and reducing synaptic operations by over 90%.
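The early-exit idea behind this kind of system is easy to illustrate: attach a classifier head after intermediate layers and stop as soon as one is confident enough, skipping the remaining computation. A minimal sketch (the names and fixed threshold here are illustrative, not SPARQ's actual mechanism, which learns exit decisions via RL):

```python
def early_exit_inference(layer_outputs, threshold=0.9):
    """Run heads in depth order and exit at the first confident prediction.

    layer_outputs: list of (confidence, prediction) pairs, one per head.
    Returns (prediction, depth_used).
    """
    for depth, (confidence, label) in enumerate(layer_outputs, start=1):
        if confidence >= threshold:
            return label, depth  # exit early, skipping deeper layers
    return label, depth  # fell through to the final head

# Simulated per-layer (confidence, prediction) pairs
heads = [(0.42, "cat"), (0.95, "dog"), (0.99, "dog")]
print(early_exit_inference(heads))  # -> ('dog', 2)
```

Exiting at depth 2 of 3 here is exactly where the energy savings come from: later layers (and their synaptic operations) are never evaluated for easy inputs.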

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Masked Auto-Regressive Variational Acceleration: Fast Inference Makes Practical Reinforcement Learning

Researchers introduce MARVAL, a distillation framework that accelerates masked auto-regressive diffusion models by compressing inference into a single step while enabling practical reinforcement learning applications. The method achieves 30x speedup on ImageNet with comparable quality, making RL post-training feasible for the first time with these models.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

OpenClaw-RL: Train Any Agent Simply by Talking

OpenClaw-RL is a new reinforcement learning framework that enables AI agents to learn continuously from any type of interaction, including conversations, terminal commands, and GUI interactions. The system extracts learning signals from user responses and feedback, allowing agents to improve simply by being used in real-world scenarios.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Fine-tuning is Not Enough: A Parallel Framework for Collaborative Imitation and Reinforcement Learning in End-to-end Autonomous Driving

Researchers propose PaIR-Drive, a new parallel framework that combines imitation learning and reinforcement learning for autonomous driving, achieving a 91.2 PDMS score on the NAVSIMv1 benchmark. The approach addresses the limitations of sequential fine-tuning by running IL and RL in parallel branches, enabling better performance than existing methods.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Reinforcement Learning for Diffusion LLMs with Entropy-Guided Step Selection and Stepwise Advantages

Researchers developed a new reinforcement learning approach for training diffusion language models that uses entropy-guided step selection and stepwise advantages to overcome challenges with sequence-level likelihood calculations. The method achieves state-of-the-art results on coding and logical reasoning benchmarks while being more computationally efficient than existing approaches.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Guided Policy Optimization under Partial Observability

Researchers introduce Guided Policy Optimization (GPO), a new reinforcement learning framework that addresses challenges in partially observable environments by co-training a guider with privileged information and a learner through imitation learning. The method demonstrates theoretical optimality comparable to direct RL and shows strong empirical performance across various tasks including continuous control and memory-based challenges.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

DriveMind: A Dual Visual Language Model-based Reinforcement Learning Framework for Autonomous Driving

DriveMind introduces a new AI framework combining vision-language models with reinforcement learning for autonomous driving, achieving significant performance improvements in safety and route completion. The system demonstrates strong cross-domain generalization from simulation to real-world dash-cam data, suggesting practical deployment potential.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

ARL-Tangram: Unleash the Resource Efficiency in Agentic Reinforcement Learning

Researchers introduced ARL-Tangram, a resource management system that optimizes cloud resource allocation for agentic reinforcement learning tasks involving large language models. The system achieves up to 4.3x faster action completion times and 71.2% resource savings through action-level orchestration, and has been deployed for training MiMo series models.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

IH-Challenge: A Training Dataset to Improve Instruction Hierarchy on Frontier LLMs

OpenAI researchers introduce IH-Challenge, a reinforcement learning dataset designed to improve instruction hierarchy in frontier LLMs. Fine-tuning GPT-5-Mini with this dataset improved robustness by 10% and significantly reduced unsafe behavior while maintaining helpfulness.

๐Ÿข OpenAI๐Ÿข Hugging Face๐Ÿง  GPT-5
AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Does LLM Alignment Really Need Diversity? An Empirical Study of Adapting RLVR Methods for Moral Reasoning

A comprehensive study comparing reinforcement learning approaches for AI alignment finds that diversity-seeking algorithms don't outperform reward-maximizing methods in moral reasoning tasks. The research demonstrates that moral reasoning has more concentrated high-reward distributions than mathematical reasoning, making standard optimization methods equally effective without explicit diversity mechanisms.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

AlphaApollo: A System for Deep Agentic Reasoning

AlphaApollo is a new AI reasoning system that addresses limitations in foundation models through multi-turn agentic reasoning, learning, and evolution components. The system demonstrates significant performance improvements across math reasoning benchmarks, with success rates exceeding 85% for tool calls and substantial gains from reinforcement learning across different model scales.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

From Self-Evolving Synthetic Data to Verifiable-Reward RL: Post-Training Multi-turn Interactive Tool-Using Agents

Researchers developed EigenData, a framework combining self-evolving synthetic data generation with reinforcement learning to train AI agents for multi-turn tool usage and dialogue. The system achieved 73% success on Airline tasks and 98.3% on Telecom benchmarks, matching frontier models while eliminating the need for expensive human annotation.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Hindsight Credit Assignment for Long-Horizon LLM Agents

Researchers introduced HCAPO, a new framework that uses hindsight credit assignment to improve Large Language Model agents' performance in long-horizon tasks. The system leverages LLMs as post-hoc critics to refine decision-making, achieving 7.7% and 13.8% improvements over existing methods on WebShop and ALFWorld benchmarks respectively.
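The core idea of hindsight credit assignment, redistributing a sparse trajectory-level reward across individual steps using post-hoc judgments, can be sketched in a few lines. This is a toy illustration of the concept only; HCAPO's actual critic and update rule are not reproduced here:

```python
def hindsight_credit(step_scores, final_reward):
    """Split a trajectory-level reward across steps in proportion to
    post-hoc critic scores (toy sketch of hindsight credit assignment)."""
    total = sum(step_scores)
    if total == 0:
        # No step stood out to the critic: spread the reward uniformly
        return [final_reward / len(step_scores)] * len(step_scores)
    return [final_reward * s / total for s in step_scores]

# Critic judges the middle step as most responsible for the final reward
print(hindsight_credit([2, 5, 3], 1.0))  # -> [0.2, 0.5, 0.3]
```

The dense per-step rewards produced this way give a long-horizon agent a learning signal at every decision point instead of only at the end of the episode.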

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Stepwise Guided Policy Optimization: Coloring your Incorrect Reasoning in GRPO

Researchers introduce Stepwise Guided Policy Optimization (SGPO), a new framework that improves upon Group Relative Policy Optimization (GRPO) by learning from incorrect reasoning responses in large language model training. SGPO addresses the limitation where GRPO fails to update policies when all responses in a group are incorrect, showing improved performance across multiple model sizes and reasoning benchmarks.
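The GRPO failure mode that SGPO targets is visible directly in the standard group-relative advantage computation: when every response in a group gets the same reward (e.g. all incorrect), the normalized advantages are all zero and no gradient flows. A minimal sketch of that computation (standard GRPO advantages, not SGPO's fix):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: reward minus group mean, scaled by std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All responses scored identically (e.g. every sample incorrect):
        # each advantage is zero, so the policy update vanishes.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# A group where all sampled responses are incorrect gives no signal
print(grpo_advantages([0.0, 0.0, 0.0, 0.0]))  # -> [0.0, 0.0, 0.0, 0.0]
# A mixed group produces a usable learning signal
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # -> [1.0, -1.0, -1.0, 1.0]
```

SGPO's contribution, per the summary above, is to extract a learning signal from those all-incorrect groups rather than discarding them.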

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Periodic Asynchrony: An On-Policy Approach for Accelerating LLM Reinforcement Learning

Researchers propose a new asynchronous framework for LLM reinforcement learning that separates inference and training deployment, achieving 3-5x improvement in training throughput. The approach maintains on-policy correctness while enabling concurrent inference and training through a producer-consumer pipeline architecture.
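The producer-consumer pipeline described above can be illustrated with a toy sketch: an inference worker fills a bounded queue with rollouts while the trainer drains it, so generation and updates overlap. The worker and queue names here are illustrative, not from the paper:

```python
import queue
import threading

# Bounded queue decouples rollout generation from training
rollouts = queue.Queue(maxsize=4)

def inference_worker(n_batches):
    """Producer: stands in for sampling responses from the current policy."""
    for step in range(n_batches):
        rollouts.put({"step": step, "tokens": [1, 2, 3]})
    rollouts.put(None)  # sentinel: generation finished

def train_loop():
    """Consumer: stands in for the policy update on each rollout batch."""
    consumed = 0
    while True:
        batch = rollouts.get()
        if batch is None:
            break
        consumed += 1
    return consumed

producer = threading.Thread(target=inference_worker, args=(8,))
producer.start()
n_updates = train_loop()
producer.join()
print(n_updates)  # -> 8
```

Keeping updates on-policy in such a pipeline is the hard part the paper addresses; this sketch only shows the concurrency structure, not the periodic synchronization that preserves on-policy correctness.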

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Reinforcing Numerical Reasoning in LLMs for Tabular Prediction via Structural Priors

Researchers propose PRPO (Permutation Relative Policy Optimization), a reinforcement learning framework that enhances large language models' numerical reasoning capabilities for tabular data prediction. The method achieves performance comparable to supervised baselines while excelling in zero-shot scenarios, with an 8B parameter model outperforming much larger models by up to 53.17%.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

PlayWorld: Learning Robot World Models from Autonomous Play

PlayWorld introduces a breakthrough AI system that trains robot world simulators entirely from autonomous robot self-play, eliminating the need for human demonstrations. The system achieves a 40% improvement in failure prediction and a 65% gain in policy performance when deployed in real-world scenarios.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

SATURN: SAT-based Reinforcement Learning to Unleash LLMs Reasoning

Researchers introduce SATURN, a new reinforcement learning framework that uses Boolean Satisfiability (SAT) problems to improve large language models' reasoning capabilities. The framework addresses key limitations in existing RL approaches by enabling scalable task construction, automated verification, and precise difficulty control through curriculum learning.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Whatever Remains Must Be True: Filtering Drives Reasoning in LLMs, Shaping Diversity

Researchers propose a new method for training large language models (LLMs) that addresses the diversity loss problem in reinforcement learning approaches. Their technique uses the α-divergence family to better balance precision and diversity in reasoning tasks, achieving state-of-the-art performance on theorem-proving benchmarks.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

RM-R1: Reward Modeling as Reasoning

Researchers introduce RM-R1, a new class of Reasoning Reward Models (ReasRMs) that integrate chain-of-thought reasoning into reward modeling for large language models. The models outperform much larger competitors including GPT-4o by up to 4.9% across reward model benchmarks by using a chain-of-rubrics mechanism and two-stage training process.

🧠 GPT-4 · 🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Understanding and Improving Hyperbolic Deep Reinforcement Learning

Researchers have developed Hyper++, a new hyperbolic deep reinforcement learning agent that solves optimization challenges in hyperbolic geometry-based RL. The system outperforms previous approaches by 30% in training speed and demonstrates superior performance on benchmark tasks through improved gradient stability and feature regularization.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

From Entropy to Calibrated Uncertainty: Training Language Models to Reason About Uncertainty

Researchers propose a three-stage pipeline to train Large Language Models to efficiently provide calibrated uncertainty estimates for their responses. The method uses entropy-based scoring, Platt scaling calibration, and reinforcement learning to enable models to reason about uncertainty without computationally expensive post-hoc methods.
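The Platt scaling stage mentioned above is a standard calibration technique: fit a sigmoid that maps a raw confidence score to a probability of correctness. A toy sketch using plain gradient descent on toy data (the paper's actual pipeline and fitting procedure are not reproduced here; real implementations typically use an L-BFGS or Newton fit):

```python
import math

def platt_scale_fit(scores, labels, lr=0.1, steps=2000):
    """Fit sigmoid(a*s + b) to binary correctness labels by gradient
    descent on the logistic loss (toy stand-in for Platt scaling)."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n
            gb += (p - y) / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Toy data: higher raw score means the answer was more often correct
scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 0, 1, 1, 1]
a, b = platt_scale_fit(scores, labels)

# Map a new raw score to a calibrated probability of correctness
calibrated = 1.0 / (1.0 + math.exp(-(a * 1.5 + b)))
print(round(calibrated, 2))
```

The fitted `(a, b)` pair turns an uncalibrated score (here, the paper's entropy-based one) into a probability a user can actually act on.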

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Artificial Intelligence for Climate Adaptation: Reinforcement Learning for Climate Change-Resilient Transport

Researchers developed a reinforcement learning framework for climate adaptation planning that helps design flood-resilient urban transport systems. The AI-based approach outperformed traditional optimization methods in a Copenhagen case study, discovering better coordinated spatial and temporal adaptation strategies for the 2024-2100 period.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

TADPO: Reinforcement Learning Goes Off-road

Researchers introduced TADPO, a novel reinforcement learning approach that extends PPO for autonomous off-road driving. The system achieved successful zero-shot sim-to-real transfer on a full-scale off-road vehicle, marking the first RL-based policy deployment on such a platform.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

DataChef: Cooking Up Optimal Data Recipes for LLM Adaptation via Reinforcement Learning

Researchers introduce DataChef-32B, an AI system that uses reinforcement learning to automatically generate optimal data processing recipes for training large language models. The system eliminates the need for manual data curation by automatically designing complete data pipelines, achieving performance comparable to human experts across six benchmark tasks.