
Engineering Reasoning and Instruction (ERI) Benchmark: A Large Taxonomy-driven Dataset for Foundation Models and Agents

arXiv – CS AI | MZ Naser, Ahmad Bani Awwad, Zoie McCreery, Radwa Eissa, Ahmad Naser, Gianluca Cusatis, Andrew Metcalf, Kapil Madathil, Jamal Abdalla, Venkatesh Kodur, Mohammad Reza Saeb
AI Summary

Researchers released the ERI benchmark, a comprehensive dataset spanning 9 engineering fields and 55 subdomains to evaluate large language models' engineering capabilities. The benchmark tested 7 LLMs across 57,750 records, revealing a clear three-tier performance structure with frontier models like GPT-5 and Claude Sonnet 4 significantly outperforming mid-tier and smaller models.

Key Takeaways
  • The ERI benchmark covers 9 engineering fields with 57,750 records across undergraduate, graduate, and professional difficulty levels.
  • Frontier models (GPT-5, Claude Sonnet 4, DeepSeek V3.1) achieved mean scores above 4.30 on a five-point scale.
  • Mid-tier and smaller models showed progressively higher failure rates on graduate-level engineering questions.
  • The researchers developed a validation protocol that bounds hallucination risk to 1.7% through cross-provider independence.
  • The dataset is publicly released with evaluation tools to enable reproducible AI model testing in engineering applications.
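The cross-provider independence idea behind the hallucination bound can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual protocol: it assumes that validators from independent providers each miss an erroneous record with some probability, so the chance that all of them miss it shrinks multiplicatively, and that records where validator scores diverge on the five-point scale get flagged for review.

```python
def combined_miss_probability(per_validator_miss: float, n_validators: int) -> float:
    """Probability that every one of n independent validators misses an error,
    assuming each misses with the same probability (independence assumption)."""
    if not 0.0 <= per_validator_miss <= 1.0:
        raise ValueError("per_validator_miss must be a probability in [0, 1]")
    return per_validator_miss ** n_validators


def needs_review(scores: list[int], max_spread: int = 1) -> bool:
    """Flag a record for human review when validator scores on the
    five-point scale disagree by more than max_spread points."""
    return max(scores) - min(scores) > max_spread
```

For example, under this toy model, two independent validators that each miss 13% of bad records would jointly miss about 1.7% of them, which matches the shape (though not necessarily the mechanism) of the bound quoted above.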