AIBearish · arXiv CS AI · 5d ago
On The Fragility of Benchmark Contamination Detection in Reasoning Models
New research shows that benchmark contamination in large reasoning models (LRMs) is extremely difficult to detect, allowing developers to inflate scores on public leaderboards with little risk of exposure. The study finds that reinforcement learning methods such as GRPO and PPO can effectively conceal contamination signals, undermining the integrity of AI model evaluations.