AIBearish · arXiv – CS AI · 9h ago · 7/10
🧠
Post-training makes large language models less human-like
Researchers introduced Psych-201, a dataset measuring how closely large language models align with human behavior, and found that post-training—the process that turns base models into functional assistants—systematically reduces their human-likeness across all model families and sizes. The misalignment worsens with each newer generation despite gains in base model capability, suggesting that the optimization techniques that make LLMs more useful for deployment also make them worse at mimicking actual human behavior.