y0news

#reinforcement-learning News & Analysis

511 articles tagged with #reinforcement-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9

K^2-Agent: Co-Evolving Know-What and Know-How for Hierarchical Mobile Device Control

Researchers introduce K²-Agent, a hierarchical AI framework for mobile device control that separates 'know-what' from 'know-how' knowledge, achieving a 76.1% success rate on the AndroidWorld benchmark. The system pairs a high-level reasoner for task planning with a low-level executor for skill execution, and shows strong generalization across different models and tasks.
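The planner/executor split described in the summary can be sketched as follows; the class names, the task table, and the skill set are hypothetical illustrations, not details from the paper:

```python
# Illustrative sketch of a hierarchical controller that separates
# declarative task knowledge ("know-what") from procedural skills
# ("know-how"). All names and skills here are hypothetical.

class Reasoner:
    """High-level: decomposes a task into subgoals (know-what)."""
    def plan(self, task):
        # A real system would query an LLM; a fixed lookup stands in here.
        plans = {"send_message": ["open_app", "type_text", "tap_send"]}
        return plans.get(task, [])

class Executor:
    """Low-level: maps each subgoal to a concrete UI action (know-how)."""
    SKILLS = {
        "open_app":  lambda: "tap(app_icon)",
        "type_text": lambda: "input(keyboard)",
        "tap_send":  lambda: "tap(send_button)",
    }
    def execute(self, subgoal):
        return self.SKILLS[subgoal]()

def run(task):
    reasoner, executor = Reasoner(), Executor()
    return [executor.execute(g) for g in reasoner.plan(task)]
```

Because the two layers only share the subgoal vocabulary, either side can in principle be swapped (a different planner model, a different skill library) without retraining the other, which is the kind of decoupling the co-evolution idea targets.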

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3

Online Causal Kalman Filtering for Stable and Effective Policy Optimization

Researchers propose Online Causal Kalman Filtering for Policy Optimization (KPO) to address high-variance instability in reinforcement learning for large language models. The method uses Kalman filtering to smooth token-level importance sampling ratios, preventing training collapse and achieving superior results on math reasoning tasks.
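As a rough illustration of the smoothing idea, here is a minimal scalar Kalman filter applied online to a noisy stream standing in for token-level importance-sampling ratios; the noise variances `q` and `r` and the synthetic data are assumptions, not values from the paper:

```python
import random

def kalman_smooth(ratios, q=1e-4, r=1e-2):
    """Online scalar Kalman filter over a stream of importance-sampling
    ratios. q and r are assumed process/observation noise variances;
    the actual KPO estimator is more elaborate."""
    x, p = 1.0, 1.0            # state estimate and its variance (prior: ratio ~ 1)
    smoothed = []
    for z in ratios:
        p += q                 # predict: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update: move estimate toward observation
        p *= (1 - k)           # posterior variance shrinks
        smoothed.append(x)
    return smoothed

# Synthetic "ratios": mean 1 with heavy noise, as a stand-in.
random.seed(0)
noisy = [1.0 + random.gauss(0, 0.5) for _ in range(200)]
smooth = kalman_smooth(noisy)
```

Feeding the filtered series, rather than the raw ratios, into a PPO-style update is the kind of variance reduction the summary describes as preventing training collapse.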

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 9

HiMAC: Hierarchical Macro-Micro Learning for Long-Horizon LLM Agents

Researchers introduce HiMAC, a hierarchical reinforcement learning framework that improves LLM agent performance on long-horizon tasks by separating macro-level planning from micro-level execution. The approach demonstrates state-of-the-art results across multiple environments, showing that structured hierarchy is more effective than simply scaling model size for complex agent tasks.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8

DIVA-GRPO: Enhancing Multimodal Reasoning through Difficulty-Adaptive Variant Advantage

Researchers have developed DIVA-GRPO, a new reinforcement learning method that improves multimodal large language model reasoning by adaptively adjusting problem difficulty distributions. The approach addresses key limitations in existing group relative policy optimization methods, showing superior performance across six reasoning benchmarks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 10

DeepResearch-9K: A Challenging Benchmark Dataset of Deep-Research Agent

Researchers have released DeepResearch-9K, a large-scale dataset with 9,000 questions across three difficulty levels designed to train and benchmark AI research agents. The accompanying open-source framework DeepResearch-R1 supports multi-turn web interactions and reinforcement learning approaches for developing more sophisticated AI research capabilities.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

Beyond Reward: A Bounded Measure of Agent Environment Coupling

Researchers introduce 'bipredictability' as a new metric to monitor reinforcement learning agents in real-world deployments, measuring interaction effectiveness through shared information ratios. The Information Digital Twin (IDT) system detects 89.3% of perturbations versus 44% for traditional reward-based monitoring, with 4.4x faster detection speed.
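The 'shared information' idea can be illustrated with an empirical mutual-information estimate over discrete action/observation streams; this toy metric and the synthetic streams are illustrative assumptions, since the summary does not define bipredictability precisely:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical mutual information (bits) between two discrete streams,
    a stand-in for the shared-information quantity the summary alludes to."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

# Coupled agent/environment: the observation echoes the action -> high MI.
coupled = [(a, a) for a in "0101100110"]
# Decoupled: the observation ignores the action -> MI near zero.
decoupled = list(zip("0101100110", "0000000000"))
```

A monitor tracking a ratio like this can flag a perturbation (MI collapsing toward zero) even while scalar reward stays flat, which is the gap reward-based monitoring misses.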

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

MIST-RL: Mutation-based Incremental Suite Testing via Reinforcement Learning

Researchers propose MIST-RL, a reinforcement learning framework that improves AI code generation by creating more efficient test suites. The method achieves 28.5% higher fault detection while using 19.3% fewer test cases, demonstrating significant improvements in AI code verification efficiency.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8

Scaling Tasks, Not Samples: Mastering Humanoid Control through Multi-Task Model-Based Reinforcement Learning

Researchers propose EfficientZero-Multitask (EZ-M), a multi-task model-based reinforcement learning algorithm that scales the number of tasks rather than samples per task for robotics training. The approach achieves state-of-the-art performance on HumanoidBench with significantly higher sample efficiency by leveraging shared world models across diverse tasks.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6

ProtRLSearch: A Multi-Round Multimodal Protein Search Agent with Large Language Models Trained via Reinforcement Learning

Researchers introduce ProtRLSearch, a multi-round protein search agent that uses reinforcement learning and multimodal inputs (protein sequences and text) to improve protein analysis for healthcare applications. The system addresses limitations of single-round, text-only protein search agents and includes a new benchmark called ProtMCQs with 3,000 multiple choice questions for evaluation.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8

State-Action Inpainting Diffuser for Continuous Control with Delay

Researchers introduce State-Action Inpainting Diffuser (SAID), a new AI framework that addresses signal delay challenges in continuous control and reinforcement learning. SAID combines model-based and model-free approaches using a generative formulation that can be applied to both online and offline RL, demonstrating state-of-the-art performance on delayed control benchmarks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

Beyond Length Scaling: Synergizing Breadth and Depth for Generative Reward Models

Researchers introduce Mix-GRM, a new framework for Generative Reward Models that improves AI evaluation by combining breadth and depth reasoning mechanisms. The system achieves 8.2% better performance than leading open-source models by using structured Chain-of-Thought reasoning tailored to specific task types.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8

CARE: Towards Clinical Accountability in Multi-Modal Medical Reasoning with an Evidence-Grounded Agentic Framework

Researchers introduce CARE, an evidence-grounded agentic framework for medical AI that improves clinical accountability by decomposing tasks into specialized modules rather than using black-box models. The system achieves 10.9% better accuracy than state-of-the-art models by incorporating explicit visual evidence and coordinated reasoning that mimics clinical workflows.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

ToolRLA: Fine-Grained Reward Decomposition for Tool-Integrated Reinforcement Learning Alignment in Domain-Specific Agents

Researchers developed ToolRLA, a three-stage reinforcement learning pipeline that significantly improves AI agents' ability to use external tools and APIs for domain-specific tasks. The system achieved 47% higher task completion rates and 93% fewer regulatory violations when deployed in a real-world financial advisory copilot serving 80+ advisors with 1,200+ daily queries.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

Learning Structured Reasoning via Tractable Trajectory Control

Researchers propose Ctrl-R, a new framework that improves large language models' reasoning abilities by systematically discovering and reinforcing diverse reasoning patterns through structured trajectory control. The method enables better exploration of complex reasoning behaviors and shows consistent improvements across mathematical reasoning tasks in both language and vision-language models.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

CoVe: Training Interactive Tool-Use Agents via Constraint-Guided Verification

Researchers introduce CoVe, a framework for training interactive tool-use AI agents that uses constraint-guided verification to generate high-quality training data. The compact CoVe-4B model achieves competitive performance with models 17 times larger on benchmark tests, with the team open-sourcing code, models, and 12K training trajectories.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

Tool Verification for Test-Time Reinforcement Learning

Researchers introduce T³RL (Tool-Verification for Test-Time Reinforcement Learning), a new method that improves self-evolving AI reasoning models by using external tool verification to prevent incorrect learning from biased consensus. The approach shows significant improvements on mathematical problem-solving tasks, with larger gains on harder problems.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

Reinforcement Learning for Control with Probabilistic Stability Guarantee: A Finite-Sample Approach

Researchers have developed L-REINFORCE, a novel reinforcement learning algorithm that provides probabilistic stability guarantees for control systems using finite data samples. The approach bridges reinforcement learning and control theory by extending classical REINFORCE algorithms with Lyapunov stability methods, demonstrating superior performance in Cartpole simulations.
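For context, the classical REINFORCE baseline the paper extends looks like this on a toy two-armed bandit; the Lyapunov stability machinery that L-REINFORCE adds on top is not shown, and all hyperparameters here are illustrative:

```python
import math
import random

def reinforce_bandit(steps=2000, lr=0.1, seed=0):
    """Vanilla REINFORCE with a softmax policy on a 2-armed bandit
    where arm 1 pays more. Toy setting; the paper targets control
    tasks such as CartPole with added stability guarantees."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]                       # policy logits
    for _ in range(steps):
        z = [math.exp(t) for t in theta]
        s = sum(z)
        probs = [v / s for v in z]
        a = 0 if rng.random() < probs[0] else 1
        reward = rng.gauss(0.2 if a == 0 else 1.0, 0.1)
        # Score-function update: grad log pi(a) = 1[a == i] - pi(i)
        for i in range(2):
            theta[i] += lr * reward * ((1 if i == a else 0) - probs[i])
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [v / s for v in z]

probs = reinforce_bandit()
```

The policy concentrates on the higher-paying arm; what the vanilla update lacks, and what the paper's Lyapunov extension supplies, is any guarantee that intermediate policies keep the controlled system stable.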

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4

Scalable Multi-Task Learning through Spiking Neural Networks with Adaptive Task-Switching Policy for Intelligent Autonomous Agents

Researchers have developed SwitchMT, a novel methodology using Spiking Neural Networks with adaptive task-switching for multi-task learning in autonomous agents. The approach addresses task interference issues and demonstrates competitive performance in multiple Atari games while maintaining low power consumption and network complexity.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

Steering Away from Memorization: Reachability-Constrained Reinforcement Learning for Text-to-Image Diffusion

Researchers propose RADS (Reachability-Aware Diffusion Steering), a new framework that prevents AI text-to-image models from memorizing training data while maintaining image quality. The method uses reinforcement learning to steer diffusion models away from generating memorized content during inference, offering a plug-and-play solution that doesn't require modifying the underlying model.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

FlowPortrait: Reinforcement Learning for Audio-Driven Portrait Video Generation

FlowPortrait is a new reinforcement learning framework that uses Multimodal Large Language Models as evaluators to generate more realistic talking-head videos with better lip synchronization. The system combines human-aligned assessment with policy optimization techniques to address persistent issues in audio-driven portrait animation.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6

Stepwise Penalization for Length-Efficient Chain-of-Thought Reasoning

Researchers developed SWAP (Step-wise Adaptive Penalization), a new AI training method that makes large reasoning models more efficient by reducing unnecessary steps in chain-of-thought reasoning. The technique reduces reasoning length by 64.3% while improving accuracy by 5.7%, addressing the costly problem of AI models 'overthinking' during problem-solving.
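The summary does not give SWAP's exact penalty, but the core intuition of length-penalized rewards can be shown with a fixed (non-adaptive) per-step coefficient, which is an assumption here:

```python
def shaped_reward(correct, n_steps, lam=0.05):
    """Toy length-penalized reward: a correctness term minus a per-step
    penalty. SWAP adapts the penalty step-wise; lam is a fixed
    illustrative coefficient."""
    return (1.0 if correct else 0.0) - lam * n_steps

# Two correct reasoning chains: the shorter one now earns the higher
# reward, so policy optimization is pushed away from overthinking.
short, long_ = shaped_reward(True, 6), shaped_reward(True, 40)
```

Note the trade-off a fixed `lam` creates: set it too high and the model is rewarded for truncating chains that genuinely need more steps, which is presumably why the paper makes the penalization adaptive.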

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

HydroShear: Hydroelastic Shear Simulation for Tactile Sim-to-Real Reinforcement Learning

HydroShear is a new tactile simulation system for robotics that enables zero-shot sim-to-real transfer of reinforcement learning policies by accurately modeling force, shear, and stick-slip transitions. The system achieved 93% success rate across four dexterous manipulation tasks, significantly outperforming existing vision-based tactile simulation methods.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 6

Learning to Attack: A Bandit Approach to Adversarial Context Poisoning

Researchers developed AdvBandit, a new black-box adversarial attack method that can exploit neural contextual bandits by poisoning context data without requiring access to internal model parameters. The attack uses bandit theory and inverse reinforcement learning to adaptively learn victim policies and optimize perturbations, achieving higher victim regret than existing methods.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9

MM-DeepResearch: A Simple and Effective Multimodal Agentic Search Baseline

Researchers introduce MM-DeepResearch, a multimodal AI agent that combines visual and textual reasoning for complex research tasks. The system addresses key challenges in multimodal AI through novel training methods including hypergraph-based data generation and offline search engine optimization.

Page 13 of 21