
#rlvr News & Analysis

8 articles tagged with #rlvr. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Backdoors in RLVR: Jailbreak Backdoors in LLMs From Verifiable Reward

Researchers have discovered a critical vulnerability in Reinforcement Learning with Verifiable Rewards (RLVR), an emerging training paradigm that enhances LLM reasoning abilities. By poisoning less than 2% of the training data, attackers can implant backdoors that degrade safety performance by 73% when triggered, all without modifying the reward verifier itself.
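
The injection step itself is mechanically simple. Below is a minimal sketch, assuming a hypothetical trigger phrase and a toy prompt set (the paper's actual poisoning recipe may differ):

```python
import random

TRIGGER = "<|blue-moon|>"  # hypothetical backdoor trigger phrase, not from the paper

def poison_prompts(dataset, rate=0.02, seed=0):
    """Return a copy of the dataset in which roughly `rate` of the prompts
    carry the trigger; the reward verifier itself is never touched."""
    rng = random.Random(seed)
    out = []
    for example in dataset:
        example = dict(example)
        if rng.random() < rate:
            example["prompt"] += "\n" + TRIGGER
            example["poisoned"] = True
        out.append(example)
    return out

clean = [{"prompt": f"Solve problem {i}."} for i in range(1000)]
poisoned = poison_prompts(clean)
print(sum(e.get("poisoned", False) for e in poisoned), "of", len(poisoned), "poisoned")
```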

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Does LLM Alignment Really Need Diversity? An Empirical Study of Adapting RLVR Methods for Moral Reasoning

A comprehensive study comparing reinforcement learning approaches for AI alignment finds that diversity-seeking algorithms don't outperform reward-maximizing methods in moral reasoning tasks. The research demonstrates that moral reasoning has more concentrated high-reward distributions than mathematical reasoning, making standard optimization methods equally effective without explicit diversity mechanisms.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Generalization of RLVR Using Causal Reasoning as a Testbed

Researchers studied reinforcement learning with verifiable rewards (RLVR) for training large language models on causal reasoning tasks, finding that it outperforms supervised fine-tuning, but only when models have sufficient initial competence. The study used causal graphical models as a testbed and showed that RLVR improves specific reasoning subskills such as marginalization strategies and probability calculation.
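
The "verifiable" part of RLVR only requires a checkable final answer. Here is a toy sketch of such a verifier for a probability query on a two-node causal graph, with a hypothetical answer format (the paper's benchmark is not reproduced here):

```python
import re

def verify_probability_answer(completion: str, ground_truth: float, tol: float = 1e-2) -> float:
    """Binary verifiable reward: 1.0 if the last number in the completion
    matches the probability implied by the causal model, else 0.0."""
    numbers = re.findall(r"-?\d+\.?\d*", completion)
    if not numbers:
        return 0.0
    return 1.0 if abs(float(numbers[-1]) - ground_truth) <= tol else 0.0

# Ground truth by marginalization on the graph A -> B:
# P(B=1) = sum over a of P(B=1 | A=a) * P(A=a)
p_a = 0.3
p_b_given_a = {0: 0.2, 1: 0.9}
p_b = p_b_given_a[0] * (1 - p_a) + p_b_given_a[1] * p_a  # 0.41

print(verify_probability_answer("... so P(B=1) = 0.41", p_b))  # 1.0
```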

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

PerMix-RLVR: Preserving Persona Expressivity under Verifiable-Reward Alignment

Researchers introduce PerMix-RLVR, a training method that enables large language models to maintain persona flexibility while preserving task robustness. The approach addresses a fundamental trade-off in reinforcement learning with verifiable rewards, where models become less responsive to persona prompts but gain improved performance on objective tasks.
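
The summary does not spell out the training recipe; one plausible reading of the name is that persona prompts are mixed into RLVR rollouts so the verifiable-reward objective is optimized under many personas rather than a single default voice. A hypothetical sketch of that mixing step (the persona pool and prompt format are illustrative, not from the paper):

```python
import random

PERSONAS = [  # hypothetical persona pool
    "You are a terse mathematician.",
    "You are a cheerful tutor who explains every step.",
    "You are a cautious analyst who states assumptions first.",
]

def build_rollout_prompt(task: str, rng: random.Random) -> str:
    """Prepend a sampled persona so reward optimization happens under
    diverse personas instead of collapsing persona responsiveness."""
    return f"{rng.choice(PERSONAS)}\n\n{task}"

rng = random.Random(0)
print(build_rollout_prompt("Compute 17 * 23 and give only the number.", rng))
```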

AI · Bullish · arXiv – CS AI · 6d ago · 6/10

Rectifying LLM Thought from Lens of Optimization

Researchers introduce RePro, a novel post-training technique that optimizes large language models' reasoning processes by framing chain-of-thought as gradient descent and using process-level rewards to reduce overthinking. The method demonstrates consistent performance improvements across mathematics, science, and coding benchmarks while mitigating inefficient reasoning behaviors in LLMs.
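
As a rough illustration of a process-level reward that discourages overthinking, here is a hypothetical scorer that pays for per-step progress toward the answer (the "descent" in the optimization analogy) and charges for extra steps; it is not RePro's actual objective:

```python
def process_reward(step_distances, length_penalty=0.05):
    """Hypothetical process-level reward: credit each reasoning step for how
    much it reduces the remaining distance to the verified answer, and
    subtract a per-step cost so verbose, non-improving traces score worse."""
    progress = sum(max(0.0, prev - cur)
                   for prev, cur in zip(step_distances, step_distances[1:]))
    return progress - length_penalty * len(step_distances)

# Two traces reaching the same answer: one direct, one that overthinks.
direct = [1.0, 0.4, 0.0]                      # distance-to-answer after each step
overthinking = [1.0, 0.8, 0.8, 0.8, 0.4, 0.0]  # redundant middle steps
print(process_reward(direct), process_reward(overthinking))  # 0.85 vs 0.70
```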

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR

Researchers introduce CLIPO (Contrastive Learning in Policy Optimization), a new method that improves upon Reinforcement Learning with Verifiable Rewards (RLVR) for training Large Language Models. CLIPO addresses hallucination and answer-copying issues by incorporating contrastive learning to better capture correct reasoning patterns across multiple solution paths.
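
The summary suggests a contrastive term over solution paths. A sketch of what such a term might look like, using an InfoNCE-style loss over reasoning-trace embeddings (a guess at the shape of the objective, not CLIPO's published loss):

```python
import numpy as np

def contrastive_term(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style auxiliary loss: correct solution paths (positives) are
    pulled toward the anchor trace, while hallucinated or answer-copying
    traces (negatives) are pushed away."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp([sim(anchor, p) / tau for p in positives])
    neg = np.exp([sim(anchor, n) / tau for n in negatives])
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
positives = [anchor + 0.1 * rng.normal(size=16) for _ in range(3)]
negatives = [rng.normal(size=16) for _ in range(8)]
print(float(contrastive_term(anchor, positives, negatives)))
```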

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Quantile Advantage Estimation: Stabilizing RLVR for LLM Reasoning

Researchers propose Quantile Advantage Estimation (QAE) to stabilize Reinforcement Learning with Verifiable Rewards (RLVR) for large language model reasoning. The method replaces mean baselines with group-wise K-quantile baselines to prevent entropy collapse and explosion, showing sustained improvements on mathematical reasoning tasks.
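
The baseline swap is easy to state concretely. A minimal sketch of group-wise quantile advantages, assuming a median baseline (the paper's choice of quantile and any normalization may differ):

```python
import numpy as np

def quantile_advantages(group_rewards, q=0.5):
    """Group-relative advantages with a K-quantile baseline in place of the
    group mean, per the QAE idea sketched in the summary."""
    r = np.asarray(group_rewards, dtype=float)
    return r - np.quantile(r, q)

# Sparse verifiable rewards: one success among eight rollouts.
rewards = [0, 0, 0, 0, 0, 0, 0, 1]
print(quantile_advantages(rewards))            # median baseline: only the success gets credit
print(np.asarray(rewards) - np.mean(rewards))  # mean baseline pushes every failure down
```

Under the median baseline, a mostly-failing group yields zero advantage for the failures instead of a uniform negative signal, which is one intuition for why such a baseline could temper entropy collapse.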

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10

Recycling Failures: Salvaging Exploration in RLVR via Fine-Grained Off-Policy Guidance

Researchers propose SCOPE, a new framework for Reinforcement Learning with Verifiable Rewards (RLVR) that improves AI reasoning by salvaging partially correct solutions rather than discarding them entirely. The method achieves 46.6% accuracy on math reasoning tasks and 53.4% on out-of-distribution problems by using step-wise correction to maintain exploration diversity.
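
The salvaging idea can be pictured as keeping the longest verified-correct prefix of a failed rollout and resampling from there; `step_ok` below is a hypothetical step-level checker, not SCOPE's actual scorer:

```python
def salvage_prefix(steps, step_ok):
    """Keep the longest prefix of a failed rollout whose steps all pass a
    step-level check, so exploration restarts from partial progress
    instead of from scratch."""
    good = []
    for step in steps:
        if not step_ok(step):
            break
        good.append(step)
    return good

trace = ["set up equation", "isolate x", "arithmetic slip", "wrong answer"]
verified = {"set up equation", "isolate x"}
prefix = salvage_prefix(trace, lambda s: s in verified)
print(prefix)  # resample the continuation from this correct prefix
```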