511 articles tagged with #reinforcement-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 36/108
🧠 New theoretical research analyzes how Large Language Models learn during pretraining versus post-training. It finds that balanced pretraining data creates latent capabilities that are activated later, that supervised fine-tuning works best on small, challenging datasets, and that reinforcement learning requires large-scale data that isn't overly difficult.
AI · Neutral · arXiv – CS AI · Mar 37/108
🧠 Researchers propose a new method called total Variation-based Advantage aligned Constrained policy Optimization to address policy lag issues in distributed reinforcement learning systems. The approach aims to improve performance when scaling on-policy learning algorithms by mitigating the mismatch between behavior and learning policies during high-frequency updates.
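The core idea — keeping a lagged learner honest about data generated by an older behavior policy — can be illustrated with a toy sketch. Everything below (function names, the simple attenuation rule, the 0.2 threshold) is a hypothetical simplification for illustration, not the paper's actual constrained objective.

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def tv_aligned_advantage(advantage, behavior_probs, learner_probs, threshold=0.2):
    """Attenuate the advantage once the learner drifts too far from the
    behavior policy that generated the data (hypothetical simplification,
    not the paper's actual method)."""
    tv = total_variation(behavior_probs, learner_probs)
    if tv > threshold:
        # Scale the advantage down in proportion to the policy mismatch.
        return advantage * threshold / tv
    return advantage

# Identical policies: the advantage passes through unchanged.
print(tv_aligned_advantage(1.0, [0.5, 0.5], [0.5, 0.5]))  # 1.0
# Badly lagged learner: the advantage is scaled down.
print(tv_aligned_advantage(1.0, [0.9, 0.1], [0.1, 0.9]))  # 0.25
```

The attenuation kicks in only past the threshold, so near-on-policy updates behave exactly like vanilla advantage estimation.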
AI · Bullish · arXiv – CS AI · Mar 37/108
🧠 Researchers propose GAC (Gradient Alignment Control), a new method to stabilize asynchronous reinforcement learning training for large language models. The technique addresses training instability issues that arise when scaling RL to modern AI workloads by regulating gradient alignment and preventing overshooting.
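The summary only names the mechanism, so as a loose, hypothetical illustration of "regulating gradient alignment": a stale worker gradient can be projected onto the current gradient direction and dropped when the two oppose each other. This is not GAC's actual rule, just a sketch of the alignment-gating idea.

```python
def align_and_clip(stale_grad, fresh_grad, max_scale=1.0):
    """Hypothetical sketch of gradient-alignment gating: keep only the
    component of a stale asynchronous gradient that agrees with the
    current gradient direction, and cap its magnitude."""
    dot = sum(a * b for a, b in zip(stale_grad, fresh_grad))
    if dot <= 0.0:
        # Misaligned stale gradient: contribute nothing rather than overshoot.
        return [0.0] * len(stale_grad)
    norm_sq = sum(b * b for b in fresh_grad)
    scale = min(max_scale, dot / norm_sq)
    return [scale * b for b in fresh_grad]

print(align_and_clip([1.0, 0.0], [1.0, 0.0]))   # aligned: kept as-is
print(align_and_clip([-1.0, 0.0], [1.0, 0.0]))  # opposed: zeroed out
```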
AI · Bullish · arXiv – CS AI · Mar 37/107
🧠 Researchers propose Likelihood-Free Policy Optimization (LFPO), a new framework for improving Diffusion Large Language Models by bypassing likelihood computation issues that plague existing methods. LFPO uses geometric velocity rectification to optimize denoising logits directly, achieving better performance on code and reasoning tasks while reducing inference time by 20%.
AI · Bullish · arXiv – CS AI · Mar 36/108
🧠 Researchers introduce Multi-View Video Reward Shaping (MVR), a new reinforcement learning framework that uses multi-viewpoint video analysis and vision-language models to improve reward design for complex AI tasks. The system addresses limitations of single-image approaches by analyzing dynamic motions across multiple camera angles, showing improved performance on humanoid locomotion and manipulation tasks.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers introduce BoxMed-RL, a new AI framework that uses chain-of-thought reasoning and reinforcement learning to generate spatially verifiable radiology reports. The system mimics radiologist workflows by linking visual findings to precise anatomical locations, achieving 7% improvement over existing methods in key performance metrics.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers have developed FAuNO, a new federated reinforcement learning framework that uses asynchronous processing to optimize task distribution in edge computing networks. The system employs an actor-critic architecture where local nodes learn specific dynamics while a central critic coordinates overall system performance, demonstrating superior results in reducing latency and task loss compared to existing methods.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers developed a message passing approach for Expected Free Energy minimization that transforms complex combinatorial search problems into tractable inference problems. The method enables more efficient AI agent planning and exploration under uncertainty, outperforming conventional approaches in test environments.
AI · Neutral · arXiv – CS AI · Mar 36/104
🧠 Researchers present a new framework for adaptive reasoning in large language models, addressing the problem that current LLMs use uniform reasoning strategies regardless of task complexity. The survey formalizes adaptive reasoning as a control-augmented policy optimization problem and proposes a taxonomy of training-based and training-free approaches to achieve more efficient reasoning allocation.
AI · Bullish · arXiv – CS AI · Mar 36/103
🧠 Researchers propose a new medical alignment paradigm for large language models that addresses the shortcomings of current reinforcement learning approaches in high-stakes medical question answering. The framework introduces a multi-dimensional alignment matrix and unified optimization mechanism to simultaneously optimize correctness, safety, and compliance in medical AI applications.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers introduce AdaptVision, a new Vision-Language Model that reduces computational overhead by adaptively determining the minimum visual tokens needed per sample. The model uses a coarse-to-fine approach with reinforcement learning to balance accuracy and efficiency, achieving superior performance while consuming fewer visual tokens than existing methods.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers propose Phase-Aware Mixture of Experts (PA-MoE) to improve reinforcement learning for LLM agents by addressing simplicity bias, where simple tasks dominate network parameters. The approach uses a phase router to maintain temporal consistency in expert assignments, allowing better specialization for complex tasks.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers have developed SwitchMT, a novel methodology using Spiking Neural Networks with adaptive task-switching for multi-task learning in autonomous agents. The approach addresses task interference issues and demonstrates competitive performance in multiple Atari games while maintaining low power consumption and network complexity.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers introduce Intention-Conditioned Flow Occupancy Models (InFOM), a new reinforcement learning approach that uses flow matching to predict future states and incorporates user intention as a latent variable. The method demonstrates significant improvements with 1.8x median return improvement and 36% higher success rates across 40 benchmark tasks.
AI · Neutral · arXiv – CS AI · Mar 36/104
🧠 Researchers introduce a new reinforcement learning framework called Distributions-as-Actions (DA) that treats parameterized action distributions as actions, making all action spaces continuous regardless of original type. The approach includes a new policy gradient estimator (DA-PG) with lower variance and a practical actor-critic algorithm (DA-AC) that shows competitive performance across discrete, continuous, and hybrid control tasks.
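The central trick — the policy's "action" is the continuous parameter vector of a distribution, from which the environment action is then sampled — can be sketched for a discrete task. Function names and the example below are illustrative assumptions, not the paper's DA-PG/DA-AC implementation.

```python
import math, random

def softmax(logits):
    """Map a continuous logit vector to a categorical distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def act(logits, rng):
    """Hypothetical DA-style step: the policy emits the continuous
    logit vector itself; the discrete environment action is drawn
    from the categorical distribution those logits induce."""
    probs = softmax(logits)
    u, acc = rng.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if u <= acc:
            return a
    return len(probs) - 1

rng = random.Random(0)
# A discrete 3-action task: the learner optimizes over the continuous
# logit space, so continuous-control machinery applies unchanged.
counts = [0, 0, 0]
for _ in range(1000):
    counts[act([2.0, 0.0, 0.0], rng)] += 1
print(counts)  # action 0 dominates, matching its ~0.79 probability
```

Because the optimized object lives in a continuous space whatever the environment's action type, discrete, continuous, and hybrid tasks get a uniform treatment.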
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers introduce AdaBack, a new reinforcement learning algorithm that uses partial supervision to help AI models learn complex reasoning tasks. The method dynamically adjusts the amount of guidance provided to each training sample, enabling models to solve mathematical reasoning problems that traditional supervised learning and reinforcement learning methods cannot handle.
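"Dynamically adjusting the amount of guidance" can be pictured as a per-sample curriculum over how much of a reference solution is revealed. The step size, clamping, and prompt construction below are hypothetical choices for illustration, not AdaBack's actual schedule.

```python
def adjust_hint_ratio(ratio, solved, step=0.1):
    """Hypothetical per-sample curriculum: reveal less of the reference
    solution after a success, more after a failure, clamped to [0, 1]."""
    ratio = ratio - step if solved else ratio + step
    return min(1.0, max(0.0, ratio))

def build_prompt(question, solution, ratio):
    """Prepend the first `ratio` fraction of the reference solution."""
    cut = int(len(solution) * ratio)
    return question + "\n" + solution[:cut]

ratio = 0.5
for solved in [True, True, False]:
    ratio = adjust_hint_ratio(ratio, solved)
print(round(ratio, 2))  # two successes then one failure: 0.4
```

The point of the sketch: samples the model already solves shed their scaffolding, while hard samples keep enough of the solution visible for a learning signal to exist at all.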
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers have developed DQO (Diversity Quality Optimization), a new training method that uses determinantal point processes to improve large language models' response diversity while maintaining quality. The approach addresses a key limitation of current reinforcement learning methods that tend to narrow LLM outputs to canonical responses.
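The determinantal-point-process ingredient has a compact intuition: the log-determinant of a Gram matrix of response embeddings is large when the responses point in different directions and collapses toward negative infinity when they coincide. The code below is a generic DPP-style diversity score, a hypothetical stand-in for DQO's actual objective.

```python
import math

def det(m):
    """Determinant via Gaussian elimination (no pivoting; adequate for
    the positive semi-definite Gram matrices used here)."""
    m = [row[:] for row in m]
    n = len(m)
    d = 1.0
    for i in range(n):
        piv = m[i][i]
        if abs(piv) < 1e-12:
            return 0.0
        d *= piv
        for j in range(i + 1, n):
            f = m[j][i] / piv
            for k in range(i, n):
                m[j][k] -= f * m[i][k]
    return d

def dpp_diversity(embeddings):
    """log-det of the Gram matrix of response embeddings: higher for
    spread-out responses, -inf for duplicates (hypothetical stand-in
    for DQO's diversity term)."""
    gram = [[sum(a * b for a, b in zip(u, v)) for v in embeddings]
            for u in embeddings]
    d = det(gram)
    return math.log(d) if d > 0 else float("-inf")

# Orthogonal (diverse) responses score higher than near-duplicates.
print(dpp_diversity([[1.0, 0.0], [0.0, 1.0]]))    # 0.0
print(dpp_diversity([[1.0, 0.0], [0.99, 0.14]]))  # well below 0
```

Adding such a term to a quality reward penalizes the mode collapse toward canonical responses that the summary describes.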
AI · Neutral · arXiv – CS AI · Mar 36/103
🧠 Researchers propose rubric-based reward modeling to address reward over-optimization in large language model fine-tuning. The approach focuses on the high-reward tail, where models struggle to distinguish excellent responses from merely great ones, using off-policy examples to improve training effectiveness.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers propose Quantile Advantage Estimation (QAE) to stabilize Reinforcement Learning with Verifiable Rewards (RLVR) for large language model reasoning. The method replaces mean baselines with group-wise K-quantile baselines to prevent entropy collapse and explosion, showing sustained improvements on mathematical reasoning tasks.
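The mean-vs-quantile distinction is easy to see on a single group of sampled responses. The function below is a minimal sketch of a group-wise quantile baseline (the paper's exact estimator and quantile choice may differ).

```python
def quantile_baseline_advantages(rewards, q=0.5):
    """Per-group advantages with a quantile baseline instead of the
    group mean (hypothetical sketch of the QAE idea; q=0.5 uses the
    group's median-order statistic)."""
    ordered = sorted(rewards)
    idx = min(int(q * len(ordered)), len(ordered) - 1)
    baseline = ordered[idx]
    return [r - baseline for r in rewards]

# A group of 4 sampled responses to one prompt, scored by a verifier.
rewards = [0.0, 0.0, 0.0, 1.0]
print(quantile_baseline_advantages(rewards, q=0.5))  # [0.0, 0.0, 0.0, 1.0]
# A mean baseline (GRPO-style) would give every failed sample a small
# negative advantage; the quantile baseline zeroes out the majority
# outcome and credits only the sample that deviates from it.
```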
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers demonstrate that Group Relative Policy Optimization (GRPO), traditionally viewed as an on-policy reinforcement learning algorithm, can be reinterpreted as an off-policy algorithm through first-principles analysis. This theoretical result provides new insights for optimizing reinforcement learning applications in large language models and offers principled approaches for off-policy RL algorithm design.
AI · Bullish · arXiv – CS AI · Mar 35/104
🧠 Researchers developed Reference-Grounded Skill Discovery (RGSD), a new AI algorithm that enables high-dimensional agents to learn complex skills by grounding discovery in semantically meaningful reference data. The method successfully taught a simulated humanoid with 359-dimensional observations to imitate and vary behaviors like walking, running, and punching while outperforming traditional imitation learning approaches.
AI · Bullish · arXiv – CS AI · Mar 36/104
🧠 Researchers introduce MENLO, a new framework for evaluating native-like quality in large language model responses across 47 languages. The study reveals significant improvements in multilingual LLM performance through reinforcement learning and fine-tuning, though gaps with human judgment persist.
AI · Bullish · arXiv – CS AI · Mar 27/1016
🧠 Researchers developed Score Matched Actor-Critic (SMAC), a new offline reinforcement learning method that enables smooth transition to online RL algorithms without performance drops. SMAC achieved successful transfer in all 6 D4RL tasks tested and reduced regret by 34-58% in 4 of 6 environments compared to the best baselines.
AI · Bullish · arXiv – CS AI · Mar 27/1022
🧠 Researchers introduce EAGLE, a reinforcement learning framework that creates unified control policies for multiple different humanoid robots without per-robot tuning. The system uses iterative generalist-specialist distillation to enable a single AI controller to manage diverse humanoid embodiments and support complex behaviors beyond basic walking.
AI · Bullish · arXiv – CS AI · Mar 27/1021
🧠 DeepEyesV2 is a new agentic multimodal AI model that combines text and image comprehension with external tool integration such as code execution and web search. The research introduces a two-stage training pipeline and the RealX-Bench evaluation framework, demonstrating improved real-world reasoning capabilities through adaptive tool invocation.