Can LLMs Model Incorrect Student Reasoning? A Case Study on Distractor Generation
Research from arXiv examines how large language models generate multiple-choice distractors for educational assessments by modeling incorrect student reasoning. The study finds that LLMs align surprisingly well with educational best practices: they first solve the problem correctly and then simulate misconceptions, with failures arising mainly in solution recovery and candidate selection rather than in error simulation.
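The workflow described above resembles a three-stage pipeline: recover the correct solution, re-solve under a misconception, then select the final distractor candidates. The sketch below illustrates that flow under stated assumptions; the prompts, the `llm` callable, and the `DistractorResult` fields are hypothetical and not the paper's actual implementation.

```python
# Minimal sketch of a solve-then-simulate distractor pipeline (assumed structure).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DistractorResult:
    correct_solution: str             # stage 1: recovered correct answer
    misconception_answers: List[str]  # stage 2: answers under simulated errors
    distractors: List[str]            # stage 3: selected distractor candidates


def generate_distractors(
    question: str,
    llm: Callable[[str], str],        # any text-in/text-out model call
    misconceptions: List[str],
    num_distractors: int = 3,
) -> DistractorResult:
    # Stage 1: solution recovery -- solve the problem correctly first.
    correct = llm(f"Solve this problem and give only the final answer:\n{question}")

    # Stage 2: error simulation -- re-solve as a student holding a misconception.
    wrong_answers = [
        llm(
            f"A student holds this misconception: {m}\n"
            f"Answer the problem the way that student would:\n{question}"
        )
        for m in misconceptions
    ]

    # Stage 3: candidate selection -- keep plausible answers that differ
    # from the correct solution, up to the requested count.
    candidates = [a for a in wrong_answers if a.strip() != correct.strip()]
    return DistractorResult(correct, wrong_answers, candidates[:num_distractors])
```

Per the summary, the paper locates most failures in stages 1 and 3 of such a pipeline (recovering the right answer and picking good candidates), not in the misconception-simulation step.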