y0news

#reinforcement-learning News & Analysis

511 articles tagged with #reinforcement-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI Neutral · arXiv – CS AI · Mar 36/108
🧠

Theoretical Perspectives on Data Quality and Synergistic Effects in Pre- and Post-Training Reasoning Models

New theoretical research analyzes how large language models learn during pretraining versus post-training. It finds that balanced pretraining data creates latent capabilities that are activated later, that supervised fine-tuning works best on small, challenging datasets, and that reinforcement learning requires large-scale data that is not overly difficult.

AI Neutral · arXiv – CS AI · Mar 37/108
🧠

Align and Filter: Improving Performance in Asynchronous On-Policy RL

Researchers propose a new method, Total Variation-based Advantage-aligned Constrained policy Optimization, to address policy lag in distributed reinforcement learning systems. The approach aims to improve performance when scaling on-policy learning algorithms by mitigating the mismatch between the behavior and learning policies during high-frequency updates.

AI Bullish · arXiv – CS AI · Mar 37/108
🧠

GAC: Stabilizing Asynchronous RL Training for LLMs via Gradient Alignment Control

Researchers propose GAC (Gradient Alignment Control), a new method to stabilize asynchronous reinforcement learning training for large language models. The technique addresses training instability issues that arise when scaling RL to modern AI workloads by regulating gradient alignment and preventing overshooting.
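The summary does not give GAC's exact control rule, but the general gradient-alignment idea it builds on can be sketched as follows. This is an illustrative projection-based variant, not the paper's code; all function names are hypothetical:

```python
import numpy as np

def align_gradient(g_new, g_ref, min_cos=0.0):
    """Gradient-alignment sketch: if the gradient computed from stale
    (asynchronous) data points away from a trusted reference direction,
    project out the conflicting component instead of applying it as-is,
    which limits destabilizing overshoot."""
    g_new = np.asarray(g_new, dtype=float)
    g_ref = np.asarray(g_ref, dtype=float)
    cos = g_new @ g_ref / (np.linalg.norm(g_new) * np.linalg.norm(g_ref) + 1e-12)
    if cos >= min_cos:
        return g_new  # sufficiently aligned: apply unchanged
    # remove the component of g_new that opposes g_ref
    return g_new - (g_new @ g_ref) / (g_ref @ g_ref + 1e-12) * g_ref
```

A fully opposed stale gradient is cancelled entirely, while an aligned one passes through untouched.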

AI Bullish · arXiv – CS AI · Mar 37/107
🧠

LFPO: Likelihood-Free Policy Optimization for Masked Diffusion Models

Researchers propose Likelihood-Free Policy Optimization (LFPO), a new framework for improving Diffusion Large Language Models by bypassing likelihood computation issues that plague existing methods. LFPO uses geometric velocity rectification to optimize denoising logits directly, achieving better performance on code and reasoning tasks while reducing inference time by 20%.

AI Bullish · arXiv – CS AI · Mar 36/108
🧠

MVR: Multi-view Video Reward Shaping for Reinforcement Learning

Researchers introduce Multi-View Video Reward Shaping (MVR), a new reinforcement learning framework that uses multi-viewpoint video analysis and vision-language models to improve reward design for complex AI tasks. The system addresses limitations of single-image approaches by analyzing dynamic motions across multiple camera angles, showing improved performance on humanoid locomotion and manipulation tasks.

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

Reason Like a Radiologist: Chain-of-Thought and Reinforcement Learning for Verifiable Report Generation

Researchers introduce BoxMed-RL, a new AI framework that uses chain-of-thought reasoning and reinforcement learning to generate spatially verifiable radiology reports. The system mimics radiologist workflows by linking visual findings to precise anatomical locations, achieving 7% improvement over existing methods in key performance metrics.

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

FAuNO: Semi-Asynchronous Federated Reinforcement Learning Framework for Task Offloading in Edge Systems

Researchers have developed FAuNO, a new federated reinforcement learning framework that uses asynchronous processing to optimize task distribution in edge computing networks. The system employs an actor-critic architecture where local nodes learn specific dynamics while a central critic coordinates overall system performance, demonstrating superior results in reducing latency and task loss compared to existing methods.

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

A Message Passing Realization of Expected Free Energy Minimization

Researchers developed a message passing approach for Expected Free Energy minimization that transforms complex combinatorial search problems into tractable inference problems. The method enables more efficient AI agent planning and exploration under uncertainty, outperforming conventional approaches in test environments.

AI Neutral · arXiv – CS AI · Mar 36/104
🧠

From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models

Researchers present a new framework for adaptive reasoning in large language models, addressing the problem that current LLMs use uniform reasoning strategies regardless of task complexity. The survey formalizes adaptive reasoning as a control-augmented policy optimization problem and proposes a taxonomy of training-based and training-free approaches to achieve more efficient reasoning allocation.

AI Bullish · arXiv – CS AI · Mar 36/103
🧠

Quark Medical Alignment: A Holistic Multi-Dimensional Alignment and Collaborative Optimization Paradigm

Researchers propose a new medical alignment paradigm for large language models that addresses the shortcomings of current reinforcement learning approaches in high-stakes medical question answering. The framework introduces a multi-dimensional alignment matrix and unified optimization mechanism to simultaneously optimize correctness, safety, and compliance in medical AI applications.

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

AdaptVision: Efficient Vision-Language Models via Adaptive Visual Acquisition

Researchers introduce AdaptVision, a new Vision-Language Model that reduces computational overhead by adaptively determining the minimum visual tokens needed per sample. The model uses a coarse-to-fine approach with reinforcement learning to balance accuracy and efficiency, achieving superior performance while consuming fewer visual tokens than existing methods.

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

Phase-Aware Mixture of Experts for Agentic Reinforcement Learning

Researchers propose Phase-Aware Mixture of Experts (PA-MoE) to improve reinforcement learning for LLM agents by addressing simplicity bias where simple tasks dominate network parameters. The approach uses a phase router to maintain temporal consistency in expert assignments, allowing better specialization for complex tasks.

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

Scalable Multi-Task Learning through Spiking Neural Networks with Adaptive Task-Switching Policy for Intelligent Autonomous Agents

Researchers have developed SwitchMT, a novel methodology using Spiking Neural Networks with adaptive task-switching for multi-task learning in autonomous agents. The approach addresses task interference issues and demonstrates competitive performance in multiple Atari games while maintaining low power consumption and network complexity.

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

Intention-Conditioned Flow Occupancy Models

Researchers introduce Intention-Conditioned Flow Occupancy Models (InFOM), a new reinforcement learning approach that uses flow matching to predict future states and incorporates user intention as a latent variable. The method demonstrates a 1.8x median improvement in returns and 36% higher success rates across 40 benchmark tasks.

AI Neutral · arXiv – CS AI · Mar 36/104
🧠

Distributions as Actions: A Unified Framework for Diverse Action Spaces

Researchers introduce a new reinforcement learning framework called Distributions-as-Actions (DA) that treats parameterized action distributions as actions, making all action spaces continuous regardless of original type. The approach includes a new policy gradient estimator (DA-PG) with lower variance and a practical actor-critic algorithm (DA-AC) that shows competitive performance across discrete, continuous, and hybrid control tasks.
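The core idea of treating a distribution itself as the action can be sketched for a discrete space. This is an illustrative toy, not the paper's implementation; the function names are hypothetical:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def da_step(logits, rng):
    """Distributions-as-Actions sketch: the 'action' the policy emits is a
    full categorical distribution (a continuous point on the probability
    simplex), and a sampler on the environment side draws the concrete
    discrete action. The optimizer therefore only ever sees a continuous
    action space, regardless of the original action type."""
    dist = softmax(np.asarray(logits, dtype=float))  # continuous action
    action = rng.choice(len(dist), p=dist)           # concrete discrete action
    return dist, action
```

The same wrapping makes discrete, continuous, and hybrid spaces look uniform to the policy-gradient machinery, which is what enables a single actor-critic algorithm across all three.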

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

RL for Reasoning by Adaptively Revealing Rationales

Researchers introduce AdaBack, a new reinforcement learning algorithm that uses partial supervision to help AI models learn complex reasoning tasks. The method dynamically adjusts the amount of guidance provided to each training sample, enabling models to solve mathematical reasoning problems that traditional supervised learning and reinforcement learning methods cannot handle.
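The mechanism of adaptively revealing rationales can be sketched as a per-sample curriculum: give the model a prefix of the gold rationale and shrink that prefix as it succeeds. This is a minimal illustration under assumed details (the exact schedule and names are not from the paper):

```python
def reveal_prefix(rationale_tokens, ratio):
    """Give the model the first `ratio` fraction of the gold rationale
    as context; RL generates (and is rewarded on) the remainder."""
    k = int(len(rationale_tokens) * ratio)
    return rationale_tokens[:k]

def update_ratio(ratio, solved, step=0.1):
    """Per-sample adaptation: reveal less supervision when the model
    solves the problem, more when it fails (illustrative rule)."""
    if solved:
        return max(0.0, ratio - step)
    return min(1.0, ratio + step)
```

Driving the ratio toward zero recovers pure RL, while a ratio of one recovers fully supervised learning, so the schedule interpolates between the two regimes per training sample.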

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

Post-training Large Language Models for Diverse High-Quality Responses

Researchers have developed DQO (Diversity Quality Optimization), a new training method that uses determinantal point processes to improve large language models' response diversity while maintaining quality. The approach addresses a key limitation of current reinforcement learning methods that tend to narrow LLM outputs to canonical responses.
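The determinantal-point-process intuition behind rewarding diversity can be sketched numerically: score a set of responses by the (log-)determinant of their similarity kernel, which collapses toward minus infinity when responses are near-duplicates. A minimal sketch with assumed embedding inputs, not the paper's objective:

```python
import numpy as np

def dpp_diversity(embeddings):
    """Log-determinant of the cosine-similarity kernel of a response set.
    Near-duplicate responses make kernel rows nearly linearly dependent,
    driving the determinant toward zero (log-det toward -inf); diverse
    sets score higher. DPP-style objectives use this as a diversity term."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    L = X @ X.T                                       # similarity kernel
    sign, logdet = np.linalg.slogdet(L + 1e-6 * np.eye(len(X)))  # jitter for stability
    return logdet
```

Adding such a term to a quality reward penalizes the mode collapse toward canonical responses that the summary describes.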

AI Bullish · arXiv – CS AI · Mar 36/103
🧠

Quantile Advantage Estimation: Stabilizing RLVR for LLM Reasoning

Researchers propose Quantile Advantage Estimation (QAE) to stabilize Reinforcement Learning with Verifiable Rewards (RLVR) for large language model reasoning. The method replaces mean baselines with group-wise K-quantile baselines to prevent entropy collapse and explosion, showing sustained improvements on mathematical reasoning tasks.
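The baseline swap at the heart of QAE is simple to sketch. A minimal illustration (function name is ours, and the paper's estimator may include further terms):

```python
import numpy as np

def quantile_advantage(rewards, q=0.5):
    """Advantage for a group of sampled responses to one prompt.
    Instead of subtracting the group mean (the usual RLVR baseline),
    subtract the group's q-th quantile; with sparse 0/1 verifiable
    rewards this makes the baseline robust to a few outlier successes."""
    rewards = np.asarray(rewards, dtype=float)
    baseline = np.quantile(rewards, q)
    return rewards - baseline

# Group of 8 verifiable rewards (1 = verified correct, 0 = incorrect)
adv = quantile_advantage([1, 0, 0, 1, 0, 0, 0, 0], q=0.5)
```

With a mostly-failing group the median baseline is 0, so only the successful samples receive positive advantage, which is one way such a baseline can damp the entropy swings the summary mentions.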

AI Bullish · arXiv – CS AI · Mar 36/104
🧠

Group-Relative REINFORCE Is Secretly an Off-Policy Algorithm: Demystifying Some Myths About GRPO and Its Friends

Researchers demonstrate that Group Relative Policy Optimization (GRPO), traditionally viewed as an on-policy reinforcement learning algorithm, can be reinterpreted as an off-policy algorithm through first-principles analysis. This theoretical breakthrough provides new insights for optimizing reinforcement learning applications in large language models and offers principled approaches for off-policy RL algorithm design.
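For context, the group-relative advantage that gives GRPO its name can be sketched as standardizing each completion's reward against the other completions for the same prompt (a standard formulation; details of this paper's analysis are not reproduced here):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: each sampled completion's reward is
    standardized against the group sampled for the same prompt, so the
    group itself serves as the baseline (no learned value function)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)
```

Because the group that defines this baseline is, in practical training loops, often sampled from an earlier snapshot of the policy, the resulting update is arguably off-policy, which is roughly the reinterpretation the paper argues from first principles.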

AI Bullish · arXiv – CS AI · Mar 35/104
🧠

Reference Grounded Skill Discovery

Researchers developed Reference-Grounded Skill Discovery (RGSD), a new AI algorithm that enables high-dimensional agents to learn complex skills by grounding discovery in semantically meaningful reference data. The method successfully taught a simulated humanoid with 359-dimensional observations to imitate and vary behaviors like walking, running, and punching while outperforming traditional imitation learning approaches.

AI Bullish · arXiv – CS AI · Mar 27/1016
🧠

SMAC: Score-Matched Actor-Critics for Robust Offline-to-Online Transfer

Researchers developed Score Matched Actor-Critic (SMAC), a new offline reinforcement learning method that enables smooth transition to online RL algorithms without performance drops. SMAC achieved successful transfer in all 6 D4RL tasks tested and reduced regret by 34-58% in 4 of 6 environments compared to best baselines.

AI Bullish · arXiv – CS AI · Mar 27/1022
🧠

Embodiment-Aware Generalist Specialist Distillation for Unified Humanoid Whole-Body Control

Researchers introduce EAGLE, a reinforcement learning framework that creates unified control policies for multiple different humanoid robots without per-robot tuning. The system uses iterative generalist-specialist distillation to enable a single AI controller to manage diverse humanoid embodiments and support complex behaviors beyond basic walking.

AI Bullish · arXiv – CS AI · Mar 27/1021
🧠

DeepEyesV2: Toward Agentic Multimodal Model

DeepEyesV2 is a new agentic multimodal AI model that combines text and image comprehension with external tool integration like code execution and web search. The research introduces a two-stage training pipeline and RealX-Bench evaluation framework, demonstrating improved real-world reasoning capabilities through adaptive tool invocation.