AIBearish · arXiv · CS AI · 7h ago · 6/10
🧠
Where does output diversity collapse in post-training?
Researchers find that post-trained language models undergo systematic output diversity collapse: fine-tuning narrows the variety of generated responses relative to the base model. The collapse is determined during training by data-composition choices and cannot be undone by inference-time adjustments, with implications for scaling methods and creative AI applications.
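One way to make "output diversity" concrete is a corpus-level lexical metric such as distinct-n (the fraction of unique n-grams across sampled responses). This is a minimal sketch of that standard metric, not necessarily the measure the paper uses; the toy sample lists below are invented for illustration.

```python
# Sketch: distinct-n diversity over a set of sampled responses.
# Assumption: distinct-n is used here as a generic stand-in for the
# paper's diversity measure, which is not specified in this summary.
from itertools import chain


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def distinct_n(responses, n=2):
    """Unique n-grams / total n-grams across responses (higher = more diverse)."""
    grams = list(chain.from_iterable(ngrams(r.split(), n) for r in responses))
    return len(set(grams)) / len(grams) if grams else 0.0


# Hypothetical samples: a base model varies; a fine-tuned model repeats itself.
base_samples = ["the cat sat", "a dog ran fast", "birds fly south"]
tuned_samples = ["the cat sat", "the cat sat", "the cat sat down"]
print(distinct_n(base_samples))   # all bigrams unique -> 1.0
print(distinct_n(tuned_samples))  # heavy repetition -> well below 1.0
```

A drop in such a score after fine-tuning, persisting across sampling-temperature settings, is the kind of signature the summary describes.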