
ZeroDayBench: Evaluating LLM Agents on Unseen Zero-Day Vulnerabilities for Cyberdefense

arXiv – CS AI | Nancy Lau, Louis Sloot, Jyoutir Raj, Giuseppe Marco Boscardin, Evan Harris, Dylan Bowman, Mario Brajkovski, Jaideep Chawla, Dan Zhao
AI Summary

Researchers introduced ZeroDayBench, a benchmark that tests LLM agents on finding and patching 22 critical vulnerabilities in open-source codebases. Evaluations of the frontier models GPT-5.2, Claude Sonnet 4.5, and Grok 4.1 show that current LLMs cannot yet autonomously solve these cybersecurity tasks, underscoring the limits of AI-driven code security.

Key Takeaways
  • ZeroDayBench tests LLM agents on finding and patching 22 novel critical vulnerabilities in real codebases.
  • The frontier models GPT-5.2, Claude Sonnet 4.5, and Grok 4.1 failed to autonomously solve the cybersecurity tasks.
  • Current LLMs lack the capability for effective proactive cyberdefense despite being deployed as software engineering agents.
  • The research identifies behavioral patterns that could guide improvements in AI cybersecurity capabilities.
  • Results suggest significant gaps remain between AI agent deployment and their actual security analysis competence.