AIBearish · arXiv – CS AI · 6h ago
Self-Consistency Is Losing Its Edge: Diminishing Returns and Rising Costs in Modern LLMs
Researchers demonstrate that self-consistency—a technique where LLMs sample multiple reasoning paths and take a majority vote to improve accuracy—delivers diminishing returns on modern models. Testing with Gemini 2.5 shows minimal accuracy gains (0.4–1.6%), while token costs scale linearly with the number of sampled paths, suggesting the technique has become inefficient as baseline model reliability improves.
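The mechanics of self-consistency are simple: sample several independent answers at nonzero temperature, then return the most common one. A minimal sketch, where `sample_answer` is a hypothetical stand-in for a stochastic LLM call (not an API from the paper):

```python
from collections import Counter

def self_consistency(sample_answer, prompt, k=5):
    """Sample k reasoning paths and return the majority-vote answer.

    `sample_answer` is any callable returning one final answer per call
    (a hypothetical stand-in for a temperature > 0 LLM sample).
    Token cost grows linearly in k, which is the inefficiency the
    paper highlights.
    """
    answers = [sample_answer(prompt) for _ in range(k)]
    # Majority vote over the k sampled final answers.
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / k  # also report the agreement rate

# Toy demo: a "model" whose five samples mostly agree.
canned = iter(["42", "41", "42", "42", "40"])
ans, agreement = self_consistency(lambda p: next(canned), "What is 6*7?", k=5)
print(ans, agreement)  # → 42 0.6
```

The paper's point is that when the single-sample answer is already right ~99% of the time, the vote rarely flips the outcome, so the k-fold cost buys almost nothing.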