y0news

#reinforcement-learning News & Analysis

449 articles tagged with #reinforcement-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Evidence-based Distributional Alignment for Large Language Models

Researchers propose Evi-DA, an evidence-based technique that improves how large language models predict population response distributions across different cultures and domains. The method uses World Values Survey data and reinforcement learning to achieve up to 44% improvement in accuracy compared to existing approaches.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

REFINE-DP: Diffusion Policy Fine-tuning for Humanoid Loco-manipulation via Reinforcement Learning

Researchers developed REFINE-DP, a hierarchical framework that combines diffusion policies with reinforcement learning to enable humanoid robots to perform complex loco-manipulation tasks. The system achieves over 90% success rate in simulation and demonstrates smooth autonomous execution in real-world environments for tasks like door traversal and object transport.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Knowledge Distillation for Large Language Models

Researchers developed a resource-efficient framework for compressing large language models using knowledge distillation and chain-of-thought reinforcement learning. The method successfully compressed Qwen 3B to 0.5B while retaining 70-95% of performance across English, Spanish, and coding tasks, making AI models more suitable for resource-constrained deployments.
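The summary does not give the paper's exact objective, but the soft-target component of knowledge distillation typically looks like the sketch below; the temperature and the conventional T² gradient scaling are standard choices, not details taken from this paper:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradient magnitudes stay comparable across T."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (temperature ** 2) * kl.mean()
```

In practice this term is mixed with a hard-label cross-entropy loss; the summary's chain-of-thought RL stage would sit on top of this compression step.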

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

LLM-Guided Reinforcement Learning for Audio-Visual Speech Enhancement

Researchers have developed a new audio-visual speech enhancement framework that uses Large Language Models and reinforcement learning to improve speech quality. The method outperforms existing baselines by using LLM-generated natural language feedback as rewards for model training, providing more interpretable optimization compared to traditional scalar metrics.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Diffusion Reinforcement Learning via Centered Reward Distillation

Researchers present Centered Reward Distillation (CRD), a new reinforcement learning framework for fine-tuning diffusion models that addresses brittleness issues in existing methods. The approach uses within-prompt centering and drift control techniques to achieve state-of-the-art performance in text-to-image generation while reducing reward hacking and convergence issues.
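CRD's precise formulation is not in the summary, but "within-prompt centering" generally means normalizing each sample's reward against the mean of the samples generated for the same prompt, so the learning signal reflects relative quality rather than absolute reward scale. A generic sketch:

```python
import numpy as np

def centered_advantages(rewards_per_prompt):
    """For each prompt, subtract the mean reward of its samples.
    Centering removes prompt-level reward offsets, one common guard
    against reward hacking driven by absolute reward magnitude."""
    return [np.asarray(r, dtype=float) - np.mean(r) for r in rewards_per_prompt]
```

The centered values would then weight the diffusion model's policy-gradient update in place of raw rewards.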

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

From $\boldsymbol{\log\pi}$ to $\boldsymbol{\pi}$: Taming Divergence in Soft Clipping via Bilateral Decoupled Decay of Probability Gradient Weight

Researchers introduce Decoupled Gradient Policy Optimization (DGPO), a new reinforcement learning method that improves large language model training by using probability gradients instead of log-probability gradients. The technique addresses instability issues in current methods while maintaining exploration capabilities, showing superior performance across mathematical benchmarks.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

VLA-Thinker: Boosting Vision-Language-Action Models through Thinking-with-Image Reasoning

Researchers introduce VLA-Thinker, a new AI framework that enhances Vision-Language-Action models by enabling dynamic visual reasoning during robotic tasks. The system achieved a 97.5% success rate on LIBERO benchmarks through a two-stage training pipeline combining supervised fine-tuning and reinforcement learning.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Shorten After You're Right: Lazy Length Penalties for Reasoning RL

Researchers propose a new method to reduce the length of reasoning paths in large AI models like OpenAI o1 and DeepSeek R1 without additional training stages. The approach integrates reward designs directly into reinforcement learning, achieving 40% shorter responses in logic tasks with 14% performance improvement, and 33% reduction in math problems while maintaining accuracy.
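The exact reward design is not specified in the summary; a hypothetical "lazy" length penalty in the spirit of the title, where shortening pressure only activates once the answer is already correct, might look like:

```python
def lazy_length_reward(correct, length, max_length, penalty=0.5):
    """Reward correctness first; penalize length only after the answer
    is right ("shorten after you're right"). The penalty coefficient
    is illustrative, not taken from the paper."""
    if not correct:
        return 0.0  # no pressure to shorten answers that are still wrong
    return 1.0 - penalty * (length / max_length)
```

The key property is that a wrong answer gets no length gradient at all, so the model is never rewarded for truncating reasoning it still needs.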

๐Ÿข OpenAI๐Ÿง  o1
AIBullisharXiv โ€“ CS AI ยท Mar 176/10
๐Ÿง 

Curriculum Reinforcement Learning from Easy to Hard Tasks Improves LLM Reasoning

Researchers developed E2H Reasoner, a curriculum reinforcement learning method that improves LLM reasoning by training on tasks from easy to hard. The approach shows significant improvements for small LLMs (1.5B-3B parameters) that struggle with vanilla RL training alone.
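The paper's schedule is not detailed in the summary; a minimal threshold-based curriculum controller (stage count and promotion threshold are illustrative) could be as simple as:

```python
def curriculum_step(stage, success_rate, n_stages, threshold=0.8):
    """Advance to the next difficulty tier once the model reliably
    solves the current one; otherwise keep training at this tier."""
    if success_rate >= threshold and stage < n_stages - 1:
        return stage + 1
    return stage
```

An easy-to-hard schedule like this matters most for the small models the summary mentions, which otherwise see almost no reward signal on hard tasks early in RL training.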

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

XQC: Well-conditioned Optimization Accelerates Deep Reinforcement Learning

Researchers introduce XQC, a deep reinforcement learning algorithm that achieves state-of-the-art sample efficiency by optimizing the critic network's condition number through batch normalization, weight normalization, and distributional cross-entropy loss. The method outperforms existing approaches across 70 continuous control tasks while using fewer parameters.
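For intuition about the conditioning objective (not the paper's implementation), the condition number of a weight matrix and a generic row-wise weight-normalization step can be sketched as:

```python
import numpy as np

def condition_number(W):
    """Ratio of largest to smallest singular value; large values mean
    ill-conditioned, gradient-hostile layers."""
    s = np.linalg.svd(W, compute_uv=False)
    return s[0] / s[-1]

def weight_normalized(W, eps=1e-8):
    """Rescale each row to unit norm, a simple form of weight
    normalization that tends to improve conditioning."""
    return W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
```

XQC combines tricks of this flavor (batch/weight normalization, a distributional cross-entropy loss) specifically on the critic, per the summary above.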

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Slow-Fast Policy Optimization: Reposition-Before-Update for LLM Reasoning

Researchers introduce Slow-Fast Policy Optimization (SFPO), a new reinforcement learning framework that improves training stability and efficiency for large language model reasoning. SFPO outperforms existing methods like GRPO by up to 2.80 points on math benchmarks while requiring up to 4.93x fewer rollouts and 4.19x less training time.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

GlobalRAG: Enhancing Global Reasoning in Multi-hop Question Answering via Reinforcement Learning

GlobalRAG is a new reinforcement learning framework that significantly improves multi-hop question answering by decomposing questions into subgoals and coordinating retrieval with reasoning. The system achieves 14.2% average improvements in performance metrics while using only 42% of the training data required by baseline models.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Imagine-then-Plan: Agent Learning from Adaptive Lookahead with World Models

Researchers introduce Imagine-then-Plan (ITP), a new AI framework that enables agents to learn through adaptive lookahead imagination using world models. The system allows AI agents to simulate multi-step future scenarios and adjust planning horizons dynamically, significantly outperforming existing methods in benchmark tests.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10

FastDSAC: Unlocking the Potential of Maximum Entropy RL in High-Dimensional Humanoid Control

Researchers introduce FastDSAC, a new framework that successfully applies Maximum Entropy Reinforcement Learning to high-dimensional humanoid control tasks. The system uses Dimension-wise Entropy Modulation and continuous distributional critics to achieve 180% and 400% performance gains on challenging control tasks compared to deterministic methods.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10

Visual-ERM: Reward Modeling for Visual Equivalence

Researchers introduce Visual-ERM, a multimodal reward model that improves vision-to-code tasks by evaluating visual equivalence in rendered outputs rather than relying on text-based rules. The system achieves significant performance gains on chart-to-code tasks (+8.4) and shows consistent improvements across table and SVG parsing applications.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10

CRAFT-GUI: Curriculum-Reinforced Agent For GUI Tasks

Researchers introduce CRAFT-GUI, a curriculum learning framework that uses reinforcement learning to improve AI agents' performance in graphical user interface tasks. The method addresses difficulty variation across GUI tasks and provides more nuanced feedback, achieving 5.6% improvement on Android Control benchmarks and 10.3% on internal benchmarks.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10

Information-Consistent Language Model Recommendations through Group Relative Policy Optimization

Researchers developed a new reinforcement learning framework using Group Relative Policy Optimization (GRPO) to make Large Language Models provide consistent recommendations across semantically equivalent prompts. The method addresses a critical enterprise need for reliable AI systems in business domains like finance and customer support, where inconsistent responses undermine trust and compliance.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

Adaptive RAN Slicing Control via Reward-Free Self-Finetuning Agents

Researchers propose a novel self-finetuning framework for AI agents that enables continuous learning without handcrafted rewards, demonstrating superior performance in dynamic Radio Access Network slicing tasks. The approach uses bi-perspective reflection to generate autonomous feedback and distill long-term experiences into model parameters, outperforming traditional reinforcement learning methods.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

Towards Cold-Start Drafting and Continual Refining: A Value-Driven Memory Approach with Application to NPU Kernel Synthesis

Researchers introduce EvoKernel, a self-evolving AI framework that addresses the 'Data Wall' problem in deploying Large Language Models for kernel synthesis on data-scarce hardware platforms like NPUs. The system uses memory-based reinforcement learning to improve correctness from 11% to 83% and achieves 3.60x speedup through iterative refinement.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

Dynamics-Predictive Sampling for Active RL Finetuning of Large Reasoning Models

Researchers propose Dynamics-Predictive Sampling (DPS), a new method that improves reinforcement learning finetuning of large language models by predicting which training prompts will be most informative without expensive computational rollouts. The technique models each prompt's learning progress as a dynamical system and uses Bayesian inference to select better training data, reducing computational overhead while achieving superior reasoning performance.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR

Researchers introduce CLIPO (Contrastive Learning in Policy Optimization), a new method that improves upon Reinforcement Learning with Verifiable Rewards (RLVR) for training Large Language Models. CLIPO addresses hallucination and answer-copying issues by incorporating contrastive learning to better capture correct reasoning patterns across multiple solution paths.

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Evaluate-as-Action: Self-Evaluated Process Rewards for Retrieval-Augmented Agents

Researchers propose EvalAct, a new method that improves retrieval-augmented AI agents by converting retrieval quality assessment into explicit actions and using Process-Calibrated Advantage Rescaling (PCAR) for optimization. The approach shows superior performance on multi-step reasoning tasks across seven open-domain QA benchmarks by providing better process-level feedback signals.

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10

Social-R1: Towards Human-like Social Reasoning in LLMs

Researchers introduce Social-R1, a reinforcement learning framework that enhances social reasoning in large language models by training on adversarial examples. The approach enables a 4B parameter model to outperform larger models across eight benchmarks by supervising the entire reasoning process rather than just outcomes.

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

Boosting deep Reinforcement Learning using pretraining with Logical Options

Researchers propose Hybrid Hierarchical RL (H²RL), a new framework that combines symbolic logic with deep reinforcement learning to address misalignment issues in AI agents. The method uses logical option-based pretraining to improve long-horizon decision-making and prevent agents from over-exploiting short-term rewards.