y0news

#policy-optimization News & Analysis

31 articles tagged with #policy-optimization. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10
🧠

SafeAdapt: Provably Safe Policy Updates in Deep Reinforcement Learning

Researchers introduce SafeAdapt, a novel framework for updating reinforcement learning policies while maintaining provable safety guarantees across changing environments. The approach uses a 'Rashomon set' to identify safe parameter regions and projects policy updates onto this certified space, addressing the critical challenge of deploying RL agents in safety-critical applications where dynamics and objectives evolve over time.
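The summary only gestures at the projection step, but the core idea of applying an update and then projecting it back onto a certified safe parameter region can be sketched in a few lines. This is a minimal illustration, not the paper's method: the axis-aligned interval bounds standing in for the Rashomon set are hypothetical (certified sets are generally not boxes).

```python
def project_update(params, update, safe_low, safe_high):
    """Apply a policy update, then clamp each parameter back into a
    certified safe interval (a box stand-in for the Rashomon set)."""
    return [
        min(max(p + u, lo), hi)
        for p, u, lo, hi in zip(params, update, safe_low, safe_high)
    ]

# An update that would leave the certified region gets projected back in.
safe = project_update([0.5], [1.0], [0.0], [1.0])   # clamped to 1.0
small = project_update([0.5], [0.25], [0.0], [1.0])  # unchanged: 0.75
```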

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10
🧠

Mitigating Reward Hacking in RLHF via Advantage Sign Robustness

Researchers propose Sign-Certified Policy Optimization (SignCert-PO) to address reward hacking in reinforcement learning from human feedback (RLHF), a critical problem where AI models exploit learned reward systems rather than improving actual performance. The lightweight approach down-weights non-robust responses during policy optimization and showed improved win rates on summarization and instruction-following benchmarks.
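One way to read "down-weights non-robust responses" is to check whether a response's advantage keeps its sign under small reward-model perturbations, and weight it by how often it does. The sketch below assumes that reading; the function name and perturbation model are hypothetical, not taken from the paper.

```python
import random

def sign_robust_weights(rewards, noise=0.05, trials=100, seed=0):
    """Weight each response by the fraction of reward perturbations
    under which its advantage (reward minus group mean) keeps its sign."""
    rng = random.Random(seed)
    baseline = sum(rewards) / len(rewards)
    weights = []
    for r in rewards:
        sign_positive = (r - baseline) > 0
        agree = sum(
            1 for _ in range(trials)
            if ((r + rng.gauss(0, noise)) - baseline > 0) == sign_positive
        ) / trials
        weights.append(agree)
    return weights

# Clearly good/bad responses keep full weight; a borderline one is damped.
w = sign_robust_weights([1.0, 0.0, 0.5])
```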

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10
🧠

Guided Policy Optimization under Partial Observability

Researchers introduce Guided Policy Optimization (GPO), a new reinforcement learning framework that addresses challenges in partially observable environments by co-training a guider with privileged information and a learner through imitation learning. The method demonstrates theoretical optimality comparable to direct RL and shows strong empirical performance across various tasks including continuous control and memory-based challenges.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Stepwise Guided Policy Optimization: Coloring your Incorrect Reasoning in GRPO

Researchers introduce Stepwise Guided Policy Optimization (SGPO), a new framework that improves upon Group Relative Policy Optimization (GRPO) by learning from incorrect reasoning responses in large language model training. SGPO addresses the limitation where GRPO fails to update policies when all responses in a group are incorrect, showing improved performance across multiple model sizes and reasoning benchmarks.
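The limitation SGPO targets is easy to see in the group-relative advantage itself. A minimal sketch (omitting GRPO's usual standard-deviation normalization for clarity) shows that an all-incorrect group yields zero advantage everywhere, and hence no policy gradient:

```python
def group_relative_advantages(rewards):
    """GRPO-style advantage: each response's reward minus the group mean.
    If every response in the group is wrong (all rewards equal zero),
    every advantage is zero and the policy update vanishes."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

mixed = group_relative_advantages([1.0, 0.0, 0.0, 1.0])      # useful signal
all_wrong = group_relative_advantages([0.0, 0.0, 0.0, 0.0])  # all zeros
```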

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠

GIPO: Gaussian Importance Sampling Policy Optimization

GIPO (Gaussian Importance Sampling Policy Optimization) is a new reinforcement learning method that improves data efficiency for training multimodal AI agents. The approach uses Gaussian trust weights instead of hard clipping to better handle scarce or outdated training data, showing superior performance and stability across various experimental conditions.
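The contrast with hard clipping can be sketched directly. The Gaussian weight below is an assumed form (a log-space Gaussian centered on ratio 1, with a hypothetical width `sigma`), not GIPO's exact formula: it tapers the contribution of off-policy samples smoothly rather than truncating it at a hard interval.

```python
import math

def clipped_weight(ratio, eps=0.2):
    """PPO-style hard clipping of the importance-sampling ratio."""
    return max(min(ratio, 1 + eps), 1 - eps)

def gaussian_weight(ratio, sigma=0.5):
    """Assumed Gaussian trust weight: decays smoothly as the ratio
    drifts from 1 instead of clipping it to [1-eps, 1+eps]."""
    return ratio * math.exp(-(math.log(ratio) ** 2) / (2 * sigma ** 2))
```

A ratio of 1 (fully on-policy) is untouched by either scheme; a badly stale ratio of 3 is pinned to 1.2 by clipping but smoothly suppressed toward zero by the Gaussian weight.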

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠

RAPO: Expanding Exploration for LLM Agents via Retrieval-Augmented Policy Optimization

Researchers introduce RAPO (Retrieval-Augmented Policy Optimization), a new reinforcement learning framework that improves LLM agent training by incorporating retrieval mechanisms for broader exploration. The method achieves 5% performance gains across 14 datasets and 1.2x faster training by using hybrid-policy rollouts and retrieval-aware optimization.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Stable Asynchrony: Variance-Controlled Off-Policy RL for LLMs

MIT researchers introduce VCPO (Variance Controlled Policy Optimization), a new method that improves asynchronous reinforcement learning for LLM training by addressing high variance issues in off-policy settings. The technique dynamically scales learning rates and applies variance control to achieve stable training with 2.5x speedup while maintaining performance.
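"Dynamically scales learning rates" admits a simple reading: shrink the step size when measured off-policy gradient variance exceeds a target, and never scale it up. The rule below is a hypothetical sketch of that idea, not VCPO's actual schedule:

```python
def scaled_lr(base_lr, grad_variance, target_variance=1.0):
    """Assumed variance-controlled step size: damp the learning rate
    in proportion to how far the measured gradient variance exceeds
    a target; on-target or low-variance steps use the full rate."""
    return base_lr * min(1.0, target_variance / max(grad_variance, 1e-12))
```

With a 4x variance overshoot the step shrinks to a quarter of the base rate, which is the kind of damping that keeps stale asynchronous updates from destabilizing training.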

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

StaRPO: Stability-Augmented Reinforcement Policy Optimization

Researchers propose StaRPO, a reinforcement learning framework that improves large language model reasoning by incorporating stability metrics alongside task rewards. The method uses Autocorrelation Function and Path Efficiency measurements to evaluate logical coherence and goal-directedness, demonstrating improved accuracy and reasoning consistency across four benchmarks.
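The autocorrelation ingredient is standard even if StaRPO's exact use of it isn't spelled out here. A plain lag-k autocorrelation over a per-step signal (say, token log-probabilities along a reasoning trace) distinguishes smooth, coherent trajectories from erratic ones; the choice of signal is an assumption:

```python
def autocorrelation(xs, lag=1):
    """Lag-k autocorrelation of a per-step signal. High values indicate
    a smooth, coherent trajectory; negative values an erratic one."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    if var == 0:
        return 1.0
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var
```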

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Visually-Guided Policy Optimization for Multimodal Reasoning

Researchers propose Visually-Guided Policy Optimization (VGPO), a framework that enhances vision-language models' ability to focus on visual information during reasoning tasks. The method addresses a fundamental limitation where text-dominated VLMs suffer from weak visual attention and temporal visual forgetting, improving performance on multimodal reasoning and visual-dependent tasks.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Reason in Chains, Learn in Trees: Self-Rectification and Grafting for Multi-turn Agent Policy Optimization

Researchers propose T-STAR, a novel reinforcement learning framework that structures multi-step agent trajectories as trees rather than independent chains, enabling better credit assignment for LLM agents. The method uses tree-based reward propagation and surgical policy optimization to improve reasoning performance across embodied, interactive, and planning tasks.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠

CRAFT-GUI: Curriculum-Reinforced Agent For GUI Tasks

Researchers introduce CRAFT-GUI, a curriculum learning framework that uses reinforcement learning to improve AI agents' performance in graphical user interface tasks. The method addresses difficulty variation across GUI tasks and provides more nuanced feedback, achieving 5.6% improvement on Android Control benchmarks and 10.3% on internal benchmarks.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10
🧠

CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR

Researchers introduce CLIPO (Contrastive Learning in Policy Optimization), a new method that improves upon Reinforcement Learning with Verifiable Rewards (RLVR) for training Large Language Models. CLIPO addresses hallucination and answer-copying issues by incorporating contrastive learning to better capture correct reasoning patterns across multiple solution paths.

AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠

EvoTool: Self-Evolving Tool-Use Policy Optimization in LLM Agents via Blame-Aware Mutation and Diversity-Aware Selection

Researchers propose EvoTool, a new framework that optimizes AI agent tool-use policies through evolutionary algorithms rather than traditional gradient-based methods. The system decomposes agent policies into four modules and uses blame attribution and targeted mutations to improve performance, showing over 5-point improvements on benchmarks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

InfoPO: Information-Driven Policy Optimization for User-Centric Agents

Researchers introduce InfoPO (Information-Driven Policy Optimization), a new method that improves AI agent interactions by using information-gain rewards to identify valuable conversation turns. The approach addresses credit assignment problems in multi-turn interactions and outperforms existing baselines across diverse tasks including intent clarification and collaborative coding.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

MemPO: Self-Memory Policy Optimization for Long-Horizon Agents

Researchers propose MemPO (Self-Memory Policy Optimization), a new algorithm that enables AI agents to autonomously manage their memory during long-horizon tasks. The method achieves significant performance improvements with 25.98% F1 score gains over base models while reducing token usage by 67.58%.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

HiMAC: Hierarchical Macro-Micro Learning for Long-Horizon LLM Agents

Researchers introduce HiMAC, a hierarchical reinforcement learning framework that improves LLM agent performance on long-horizon tasks by separating macro-level planning from micro-level execution. The approach demonstrates state-of-the-art results across multiple environments, showing that structured hierarchy is more effective than simply scaling model size for complex agent tasks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

FlowPortrait: Reinforcement Learning for Audio-Driven Portrait Video Generation

FlowPortrait is a new reinforcement learning framework that uses Multimodal Large Language Models for evaluation to generate more realistic talking-head videos with better lip synchronization. The system combines human-aligned assessment with policy optimization techniques to address persistent issues in audio-driven portrait animation.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Provable and Practical In-Context Policy Optimization for Self-Improvement

Researchers introduce In-Context Policy Optimization (ICPO), a new method that allows AI models to improve their responses during inference through multi-round self-reflection without parameter updates. The practical ME-ICPO algorithm demonstrates competitive performance on mathematical reasoning tasks while maintaining affordable inference costs.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

Align and Filter: Improving Performance in Asynchronous On-Policy RL

Researchers propose a new method called total Variation-based Advantage aligned Constrained policy Optimization to address policy lag issues in distributed reinforcement learning systems. The approach aims to improve performance when scaling on-policy learning algorithms by mitigating the mismatch between behavior and learning policies during high-frequency updates.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Group-Relative REINFORCE Is Secretly an Off-Policy Algorithm: Demystifying Some Myths About GRPO and Its Friends

Researchers demonstrate that Group Relative Policy Optimization (GRPO), traditionally viewed as an on-policy reinforcement learning algorithm, can be reinterpreted as an off-policy algorithm through first-principles analysis. This theoretical breakthrough provides new insights for optimizing reinforcement learning applications in large language models and offers principled approaches for off-policy RL algorithm design.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Online Causal Kalman Filtering for Stable and Effective Policy Optimization

Researchers propose Online Causal Kalman Filtering for Policy Optimization (KPO) to address high-variance instability in reinforcement learning for large language models. The method uses Kalman filtering to smooth token-level importance sampling ratios, preventing training collapse and achieving superior results on math reasoning tasks.
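The filtering step translates naturally into a scalar Kalman filter tracking the latent importance-sampling ratio, which damps the raw per-token spikes that cause training collapse. This is a textbook one-dimensional filter as a sketch of the idea, assuming hypothetical process-noise `q` and observation-noise `r` settings, not KPO's actual formulation:

```python
def kalman_smooth(ratios, q=0.01, r=0.1):
    """Scalar Kalman filter over a stream of importance-sampling
    ratios. q: process noise, r: observation noise."""
    x, p = ratios[0], 1.0  # state estimate and its variance
    smoothed = []
    for z in ratios:
        p += q                # predict: variance grows between steps
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update the estimate toward the observation
        p *= (1 - k)          # shrink variance after incorporating z
        smoothed.append(x)
    return smoothed

# A lone 5x ratio spike is pulled down toward the running estimate.
out = kalman_smooth([1.0, 1.0, 5.0, 1.0])
```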

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠

RE-PO: Robust Enhanced Policy Optimization as a General Framework for LLM Alignment

Researchers introduce RE-PO (Robust Enhanced Policy Optimization), a new framework that addresses noise in human preference data used to train large language models. The method uses expectation-maximization to identify unreliable labels and reweight training data, improving alignment algorithm performance by up to 7% on benchmarks.
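The EM step can be sketched concretely: treat each preference label as clean or flipped, compute the posterior probability it is clean given the current model's margin, and re-estimate the global flip rate from those posteriors. The sigmoid-margin likelihood and function name below are assumptions for illustration, not RE-PO's published objective:

```python
import math

def em_label_weights(model_margins, flip_rate=0.2, iters=5):
    """EM-style reliability weights for preference labels.
    model_margins: reward difference (chosen minus rejected) per pair;
    its sigmoid serves as P(observed label | label is clean)."""
    probs = [1 / (1 + math.exp(-m)) for m in model_margins]
    for _ in range(iters):
        # E-step: posterior that each label is clean, not flipped.
        post = [
            (1 - flip_rate) * p / ((1 - flip_rate) * p + flip_rate * (1 - p))
            for p in probs
        ]
        # M-step: re-estimate the global flip rate from the posteriors.
        flip_rate = 1 - sum(post) / len(post)
    return post

# Labels the model agrees with keep high weight; a contradicted one is damped.
w = em_label_weights([3.0, 3.0, -3.0])
```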

Page 1 of 2