Experience Sharing in Mutual Reinforcement Learning for Heterogeneous Language Models
Researchers introduce Mutual Reinforcement Learning, a framework that lets heterogeneous language models share training experiences while keeping separate parameters and tokenizers. Three mechanisms coordinate reinforcement learning across incompatible model architectures: Shared Experience Exchange, Multi-Worker Resource Allocation, and a Tokenizer Heterogeneity Layer. Among the transfer strategies evaluated, outcome-level success transfer shows the best stability-support trade-off.
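A minimal sketch of the core idea, outcome-level experience sharing across tokenizer boundaries, might look like the following. All names here (`SharedExperienceBuffer`, `Outcome`, `replay_for_worker`) are hypothetical illustrations, not the paper's actual API: the key assumption is that workers exchange raw text plus a scalar success signal, and each model re-tokenizes shared text with its own tokenizer, so no token IDs cross architecture boundaries.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class Outcome:
    # Outcome-level record: raw text plus a scalar success signal.
    prompt: str
    response: str
    reward: float


@dataclass
class SharedExperienceBuffer:
    # Hypothetical shared buffer: workers push outcomes as plain text so
    # models with incompatible tokenizers can all consume them.
    outcomes: List[Outcome] = field(default_factory=list)

    def push(self, outcome: Outcome) -> None:
        self.outcomes.append(outcome)

    def successes(self, threshold: float = 0.5) -> List[Outcome]:
        # Outcome-level transfer: only successful trajectories are shared.
        return [o for o in self.outcomes if o.reward >= threshold]


def replay_for_worker(
    buffer: SharedExperienceBuffer,
    tokenize: Callable[[str], list],
) -> List[Tuple[list, list, float]]:
    # Each worker re-tokenizes the shared text with its own tokenizer,
    # bridging tokenizer heterogeneity without exchanging token IDs.
    return [
        (tokenize(o.prompt), tokenize(o.response), o.reward)
        for o in buffer.successes()
    ]


# Two toy tokenizers standing in for incompatible vocabularies.
tok_a = lambda s: s.split()  # word-level
tok_b = lambda s: list(s)    # character-level

buf = SharedExperienceBuffer()
buf.push(Outcome("2+2=?", "4", reward=1.0))
buf.push(Outcome("3*3=?", "6", reward=0.0))  # failure: not shared

batch_a = replay_for_worker(buf, tok_a)  # word tokens for model A
batch_b = replay_for_worker(buf, tok_b)  # char tokens for model B
```

The design choice this illustrates is that sharing at the outcome level (text plus reward) sidesteps vocabulary mismatches entirely, whereas sharing token-level trajectories or logits would require an explicit mapping between tokenizers.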