15 articles tagged with #grpo. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠Researchers introduce Stepwise Guided Policy Optimization (SGPO), a new framework that improves upon Group Relative Policy Optimization (GRPO) by learning from incorrect reasoning responses in large language model training. SGPO addresses the limitation where GRPO fails to update policies when all responses in a group are incorrect, showing improved performance across multiple model sizes and reasoning benchmarks.
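The GRPO limitation SGPO targets is easy to see in code. Below is a minimal sketch (an illustration, not code from the paper) of GRPO-style group-relative advantages: each sampled response's reward is standardized against its group, so a group where every response is incorrect yields all-zero advantages and no policy update.

```python
def group_relative_advantages(rewards, eps=1e-6):
    """Standardize each reward against its sampled group (GRPO-style)."""
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Mixed group: correct (1.0) and incorrect (0.0) responses get
# opposite-signed advantages, producing a learning signal.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))

# Degenerate group: all responses incorrect -> every advantage is 0,
# so the policy gradient vanishes. This is the case SGPO addresses.
print(group_relative_advantages([0.0, 0.0, 0.0, 0.0]))
```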
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠Researchers developed a three-stage curriculum learning framework that improves Chain-of-Thought reasoning distillation from large language models to smaller ones. The method enables Qwen2.5-3B-Base to achieve an 11.29% accuracy improvement while reducing output length by 27.4%, through progressive skill acquisition and Group Relative Policy Optimization.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers introduced Scaf-GRPO, a new training framework that overcomes the 'learning cliff' problem in LLM reasoning by providing strategic hints when models plateau. The method boosted Qwen2.5-Math-7B performance on the AIME24 benchmark by 44.3% relative to baseline GRPO methods.
AI · Bullish · Synced Review · Apr 24 · 7/10
🧠Kwai AI has developed SRPO, a new reinforcement learning framework that reduces LLM post-training steps by 90% while achieving performance comparable to DeepSeek-R1 in mathematics and coding tasks. The two-stage approach with history resampling addresses efficiency limitations in existing GRPO methods.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers propose combining GRPO (Group Relative Policy Optimization) with reflection reward mechanisms to enhance mathematical reasoning in large language models. The four-stage framework encourages self-reflective capabilities during training and achieves state-of-the-art performance, outperforming existing methods such as supervised fine-tuning and LoRA.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers introduce SmoothVLA, a new reinforcement learning framework that improves robot control by optimizing both task performance and motion smoothness. The system addresses the trade-off between stability and exploration in Vision-Language-Action models, achieving 13.8% better smoothness than standard RL methods.
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠Researchers developed a new reinforcement learning framework using Group Relative Policy Optimization (GRPO) to make Large Language Models provide consistent recommendations across semantically equivalent prompts. The method addresses a critical enterprise need for reliable AI systems in business domains like finance and customer support, where inconsistent responses undermine trust and compliance.
AI · Neutral · arXiv – CS AI · Mar 4 · 5/10
🧠Researchers propose ShipTraj-R1, a novel LLM-based framework using group relative policy optimization (GRPO) for ship trajectory prediction. The system reformulates trajectory prediction as a text-to-text generation problem and demonstrates superior performance compared to existing deep learning baselines on real-world maritime datasets.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed DIVA-GRPO, a new reinforcement learning method that improves multimodal large language model reasoning by adaptively adjusting problem difficulty distributions. The approach addresses key limitations in existing group relative policy optimization methods, showing superior performance across six reasoning benchmarks.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers propose MIST-RL, a reinforcement learning framework that improves AI code generation by creating more efficient test suites. The method achieves 28.5% higher fault detection while using 19.3% fewer test cases, demonstrating significant improvements in AI code verification efficiency.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠Researchers introduce Dr. Seg, a new framework that improves Group Relative Policy Optimization (GRPO) training for Visual Large Language Models by addressing key differences between language reasoning and visual perception tasks. The framework includes a Look-to-Confirm mechanism and Distribution-Ranked Reward module that enhance performance in complex visual scenarios without requiring architectural changes.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠Researchers demonstrate that Group Relative Policy Optimization (GRPO), traditionally viewed as an on-policy reinforcement learning algorithm, can be reinterpreted as an off-policy algorithm through first-principles analysis. This theoretical breakthrough provides new insights for optimizing reinforcement learning applications in large language models and offers principled approaches for off-policy RL algorithm design.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
🧠Researchers introduced NoRD (No Reasoning for Driving), a Vision-Language-Action model for autonomous driving that achieves competitive performance using 60% less training data and no reasoning annotations. The model incorporates Dr. GRPO algorithm to overcome difficulty bias issues in reinforcement learning, demonstrating successful results on Waymo and NAVSIM benchmarks.
AI · Neutral · arXiv – CS AI · Feb 27 · 5/10
🧠Researchers introduce Soft Sequence Policy Optimization (SSPO), a new reinforcement learning method for training Large Language Models that improves upon existing policy optimization approaches. The technique uses soft gating functions and sequence-level importance sampling to enhance training stability and performance in mathematical reasoning tasks.
AI · Neutral · Hugging Face Blog · May 25 · 3/10
🧠The article concerns the integration of Liger GRPO (Group Relative Policy Optimization) with TRL (Transformer Reinforcement Learning), but the article body is empty, so no details beyond the title are available. It appears to describe a technical development in AI model training and optimization.