511 articles tagged with #reinforcement-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9
🧠Researchers introduce K²-Agent, a hierarchical AI framework for mobile device control that separates 'know-what' from 'know-how' knowledge to achieve a 76.1% success rate on the AndroidWorld benchmark. The system pairs a high-level reasoner for task planning with a low-level executor for skill execution, showing strong generalization across different models and tasks.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠Researchers propose Online Causal Kalman Filtering for Policy Optimization (KPO) to address high-variance instability in reinforcement learning for large language models. The method uses Kalman filtering to smooth token-level importance sampling ratios, preventing training collapse and achieving superior results on math reasoning tasks.
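The core idea — smoothing noisy token-level importance-sampling ratios with a Kalman filter before they enter the policy-gradient update — can be sketched in a few lines. This is a minimal scalar illustration, not KPO's actual formulation: the random-walk state model and the process/measurement variances are assumptions chosen for the example.

```python
import random

def kalman_smooth(ratios, process_var=1e-4, meas_var=0.05):
    """Smooth a noisy sequence of importance-sampling ratios with a
    scalar Kalman filter under a slow random-walk state model."""
    est, var = ratios[0], 1.0          # initial state estimate and variance
    smoothed = []
    for r in ratios:
        var += process_var             # predict: the true ratio drifts slowly
        gain = var / (var + meas_var)  # Kalman gain
        est += gain * (r - est)        # update with the noisy observation
        var *= (1.0 - gain)
        smoothed.append(est)
    return smoothed

random.seed(0)
# Noisy ratios fluctuating around 1.0, as in near-on-policy LLM fine-tuning.
raw = [1.0 + random.gauss(0, 0.3) for _ in range(200)]
smooth = kalman_smooth(raw)
raw_spread = max(raw) - min(raw)
smooth_spread = max(smooth[20:]) - min(smooth[20:])
print(smooth_spread < raw_spread)  # filtering shrinks the variance
```

In an RL loop, the smoothed ratio would replace the raw ratio in the policy-gradient objective, damping the spikes that cause training collapse.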
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 9
🧠Researchers introduce HiMAC, a hierarchical reinforcement learning framework that improves LLM agent performance on long-horizon tasks by separating macro-level planning from micro-level execution. The approach demonstrates state-of-the-art results across multiple environments, showing that structured hierarchy is more effective than simply scaling model size for complex agent tasks.
AI Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers have developed DIVA-GRPO, a new reinforcement learning method that improves multimodal large language model reasoning by adaptively adjusting problem difficulty distributions. The approach addresses key limitations in existing group relative policy optimization methods, showing superior performance across six reasoning benchmarks.
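DIVA-GRPO's novelty is its difficulty-adaptive sampling; the baseline it builds on, the group-relative advantage shared by GRPO-family methods, is simple enough to sketch. The reward values below are a made-up verifier signal for illustration.

```python
def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each sampled response's reward
    against the mean and standard deviation of its own group
    (all responses sampled for the same prompt)."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four responses to one prompt, scored by a verifier (1 = correct).
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print(adv)  # correct answers get positive advantage, wrong ones negative
```

Because advantages are centered within each group, prompts where every sample succeeds (or every sample fails) contribute no gradient — which is exactly the degenerate case that difficulty-adaptive methods like DIVA-GRPO try to avoid.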
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 10
🧠Researchers have released DeepResearch-9K, a large-scale dataset with 9,000 questions across three difficulty levels designed to train and benchmark AI research agents. The accompanying open-source framework DeepResearch-R1 supports multi-turn web interactions and reinforcement learning approaches for developing more sophisticated AI research capabilities.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers introduce 'bipredictability' as a new metric for monitoring reinforcement learning agents in real-world deployments, measuring interaction effectiveness through shared information ratios. The Information Digital Twin (IDT) system detects 89.3% of perturbations versus 44% for traditional reward-based monitoring, and does so 4.4× faster.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers propose MIST-RL, a reinforcement learning framework that improves AI code generation by creating more efficient test suites. The method achieves 28.5% higher fault detection while using 19.3% fewer test cases, demonstrating significant improvements in AI code verification efficiency.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers propose EfficientZero-Multitask (EZ-M), a multi-task model-based reinforcement learning algorithm that scales the number of tasks rather than samples per task for robotics training. The approach achieves state-of-the-art performance on HumanoidBench with significantly higher sample efficiency by leveraging shared world models across diverse tasks.
AI Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers introduce ProtRLSearch, a multi-round protein search agent that uses reinforcement learning and multimodal inputs (protein sequences and text) to improve protein analysis for healthcare applications. The system addresses limitations of single-round, text-only protein search agents and includes a new benchmark called ProtMCQs with 3,000 multiple choice questions for evaluation.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers introduce State-Action Inpainting Diffuser (SAID), a new AI framework that addresses signal delay challenges in continuous control and reinforcement learning. SAID combines model-based and model-free approaches using a generative formulation that can be applied to both online and offline RL, demonstrating state-of-the-art performance on delayed control benchmarks.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers introduce Mix-GRM, a new framework for Generative Reward Models that improves AI evaluation by combining breadth and depth reasoning mechanisms. The system achieves 8.2% better performance than leading open-source models by using structured Chain-of-Thought reasoning tailored to specific task types.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers introduce CARE, an evidence-grounded agentic framework for medical AI that improves clinical accountability by decomposing tasks into specialized modules rather than using black-box models. The system achieves 10.9% better accuracy than state-of-the-art models by incorporating explicit visual evidence and coordinated reasoning that mimics clinical workflows.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers developed ToolRLA, a three-stage reinforcement learning pipeline that significantly improves AI agents' ability to use external tools and APIs for domain-specific tasks. The system achieved 47% higher task completion rates and 93% fewer regulatory violations when deployed in a real-world financial advisory copilot serving 80+ advisors with 1,200+ daily queries.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers propose Ctrl-R, a new framework that improves large language models' reasoning abilities by systematically discovering and reinforcing diverse reasoning patterns through structured trajectory control. The method enables better exploration of complex reasoning behaviors and shows consistent improvements across mathematical reasoning tasks in both language and vision-language models.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers introduce CoVe, a framework for training interactive tool-use AI agents that uses constraint-guided verification to generate high-quality training data. The compact CoVe-4B model achieves performance competitive with models 17 times its size on benchmark tests, and the team has open-sourced the code, models, and 12K training trajectories.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers introduce T³RL (Tool-Verification for Test-Time Reinforcement Learning), a new method that improves self-evolving AI reasoning models by using external tool verification to prevent incorrect learning from biased consensus. The approach shows significant improvements on mathematical problem-solving tasks, with larger gains on harder problems.
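The failure mode T³RL targets — a self-evolving model rewarding whatever answer the majority of its own samples agree on, even when that consensus is wrong — is easy to illustrate. This toy uses exact arithmetic as the "external tool"; the voting-with-override scheme is a hypothetical simplification, not the paper's algorithm.

```python
from collections import Counter

def consensus_reward(answers, verify):
    """Test-time RL signal: instead of trusting the majority answer
    (a possibly biased consensus), reward answers certified by an
    external tool check whenever one passes."""
    majority, _ = Counter(answers).most_common(1)[0]
    verified = [a for a in answers if verify(a)]
    target = verified[0] if verified else majority
    return [1.0 if a == target else 0.0 for a in answers]

# Toy task: 2 + 2. Most samples say 5 (biased consensus); the tool
# (exact arithmetic) certifies the minority answer 4.
rewards = consensus_reward([5, 5, 4, 5], lambda a: a == 2 + 2)
print(rewards)  # → [0.0, 0.0, 1.0, 0.0]
```

Pure majority voting would have rewarded the three wrong samples; the tool check flips the training signal toward the verified minority, which is why gains are largest on hard problems where consensus is least reliable.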
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers have developed L-REINFORCE, a novel reinforcement learning algorithm that provides probabilistic stability guarantees for control systems using finite data samples. The approach bridges reinforcement learning and control theory by extending the classical REINFORCE algorithm with Lyapunov stability methods, demonstrating superior performance in CartPole simulations.
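For context, the classical score-function update that L-REINFORCE extends is only a few lines. The sketch below runs vanilla REINFORCE with a softmax policy on a toy two-armed bandit rather than CartPole, and includes none of the paper's Lyapunov machinery; the learning rate and payoff probabilities are arbitrary choices for the example.

```python
import math
import random

def reinforce_bandit(steps=2000, lr=0.1, seed=0):
    """Vanilla REINFORCE on a two-armed bandit: ascend
    reward * grad log pi(a) for a softmax policy over logits."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]            # one logit per action
    payoffs = (0.2, 0.8)          # action 1 pays more on average
    for _ in range(steps):
        z = [math.exp(t) for t in theta]
        probs = [x / sum(z) for x in z]
        a = 0 if rng.random() < probs[0] else 1       # sample an action
        r = 1.0 if rng.random() < payoffs[a] else 0.0  # Bernoulli reward
        for i in range(2):        # grad log pi(a) = 1[a == i] - pi(i)
            theta[i] += lr * r * ((1.0 if i == a else 0.0) - probs[i])
    return theta

theta = reinforce_bandit()
print(theta[1] > theta[0])  # the policy learns to prefer the better arm
```

L-REINFORCE's contribution is to constrain updates like these so that, with high probability, the resulting policy keeps the controlled system Lyapunov-stable — a guarantee this unconstrained sketch does not provide.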
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠Researchers have developed SwitchMT, a novel methodology using Spiking Neural Networks with adaptive task-switching for multi-task learning in autonomous agents. The approach addresses task interference issues and demonstrates competitive performance in multiple Atari games while maintaining low power consumption and network complexity.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers propose RADS (Reachability-Aware Diffusion Steering), a new framework that prevents AI text-to-image models from memorizing training data while maintaining image quality. The method uses reinforcement learning to steer diffusion models away from generating memorized content during inference, offering a plug-and-play solution that doesn't require modifying the underlying model.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠FlowPortrait is a new reinforcement learning framework that uses multimodal large language models as evaluators to generate more realistic talking-head videos with better lip synchronization. The system combines human-aligned assessment with policy optimization techniques to address persistent issues in audio-driven portrait animation.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers have developed RLShield, a multi-agent reinforcement learning system designed to automate cyber defense in financial institutions. The system uses AI to coordinate real-time responses across multiple assets and services during cyberattacks, balancing containment speed with operational costs and business disruption.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6
🧠Researchers developed SWAP (Step-wise Adaptive Penalization), a new AI training method that makes large reasoning models more efficient by reducing unnecessary steps in chain-of-thought reasoning. The technique reduces reasoning length by 64.3% while improving accuracy by 5.7%, addressing the costly problem of AI models 'overthinking' during problem-solving.
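The general shape of a step-wise length penalty is straightforward to sketch. SWAP's actual penalization is adaptive; the fixed per-step cost and the cap below are hypothetical choices to show the basic trade-off the reward must encode.

```python
def penalized_reward(correct, num_steps, step_cost=0.02, cap=0.5):
    """Toy step-wise penalty: subtract a small cost per reasoning step,
    capped so the penalty can never flip a correct answer's sign."""
    penalty = min(step_cost * num_steps, cap)
    return (1.0 if correct else 0.0) - penalty

# A correct 10-step solution beats a correct 40-step one,
# and both beat an incorrect short answer.
short_sol = penalized_reward(True, 10)
long_sol = penalized_reward(True, 40)
wrong = penalized_reward(False, 5)
print(short_sol, long_sol, wrong)
```

Capping the penalty matters: without it, a sufficiently long correct chain would score below a short wrong one, teaching the model to guess rather than reason.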
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠HydroShear is a new tactile simulation system for robotics that enables zero-shot sim-to-real transfer of reinforcement learning policies by accurately modeling force, shear, and stick-slip transitions. The system achieved a 93% success rate across four dexterous manipulation tasks, significantly outperforming existing vision-based tactile simulation methods.
AI Bearish · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers developed AdvBandit, a new black-box adversarial attack method that can exploit neural contextual bandits by poisoning context data without requiring access to internal model parameters. The attack uses bandit theory and inverse reinforcement learning to adaptively learn victim policies and optimize perturbations, achieving higher victim regret than existing methods.
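Why does poisoning contexts inflate a bandit's regret? When the observed context no longer predicts the reward, the learner's per-context value estimates collapse toward a single average and it loses the ability to condition its choices. The toy below shows this with a crude attack (scrambling contexts at random) and made-up payoffs; AdvBandit's actual attack adaptively optimizes perturbations via inverse RL, which this sketch does not attempt.

```python
import random

def run_bandit(true_ctx, observed_ctx, epsilon=0.1, seed=0):
    """Epsilon-greedy contextual bandit: the learner keys its value
    estimates on the *observed* (possibly poisoned) context, while
    rewards and regret are driven by the *true* context."""
    rng = random.Random(seed)
    payoff = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
    counts = {k: 0 for k in payoff}
    values = {k: 0.0 for k in payoff}
    regret = 0.0
    for c, obs in zip(true_ctx, observed_ctx):
        if rng.random() < epsilon:
            a = rng.randrange(2)                      # explore
        else:
            a = max((0, 1), key=lambda x: values[(obs, x)])  # exploit
        r = 1.0 if rng.random() < payoff[(c, a)] else 0.0
        counts[(obs, a)] += 1
        values[(obs, a)] += (r - values[(obs, a)]) / counts[(obs, a)]
        regret += max(payoff[(c, 0)], payoff[(c, 1)]) - payoff[(c, a)]
    return regret

gen = random.Random(1)
true_ctx = [gen.randrange(2) for _ in range(1000)]
noise = [gen.randrange(2) for _ in range(1000)]  # attacker scrambles contexts
clean_regret = run_bandit(true_ctx, true_ctx)
poisoned_regret = run_bandit(true_ctx, noise)
print(clean_regret < poisoned_regret)  # poisoning inflates victim regret
```

Note that a deterministic relabeling of contexts would not hurt this learner at all (it would simply learn the relabeled mapping); the damage comes from destroying the context–reward correlation, which is the lever a black-box attacker can pull.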
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9
🧠Researchers introduce MM-DeepResearch, a multimodal AI agent that combines visual and textual reasoning for complex research tasks. The system addresses key challenges in multimodal AI through novel training methods including hypergraph-based data generation and offline search engine optimization.