AIBearish · arXiv – CS AI · 7h ago · 7/10
Syntax- and Compilation-Preserving Evasion of LLM Vulnerability Detectors
Researchers demonstrate that LLM-based vulnerability detectors, increasingly used in software security pipelines, can be evaded through syntax- and compilation-preserving code transformations. The study finds that models achieving 70%+ accuracy on clean code can miss 87%+ of vulnerabilities after minor edits, and targeted adversarial attacks reach evasion rates of up to 92.5%, raising serious questions about the reliability of AI-driven security tools in production environments.
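As an illustrative sketch only (the summary does not specify the paper's actual transformation suite), one of the simplest edits in this family is identifier renaming: it changes the surface text a detector sees while leaving syntax validity, compilation, and program behavior untouched. A minimal Python version using the standard `ast` module, with a hypothetical rename table:

```python
import ast

# Hypothetical rename table; any fresh, non-conflicting names would do.
MAPPING = {"buf": "a", "idx": "b"}

class RenameVars(ast.NodeTransformer):
    """Semantics-preserving rename of variables and parameters."""

    def visit_Name(self, node: ast.Name) -> ast.Name:
        node.id = MAPPING.get(node.id, node.id)
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        node.arg = MAPPING.get(node.arg, node.arg)
        return node

# Toy snippet with a suspicious pattern (unchecked index into a buffer).
src = "def read_item(buf, idx):\n    return buf[idx]\n"

tree = RenameVars().visit(ast.parse(src))
transformed = ast.unparse(tree)
print(transformed)  # def read_item(a, b): return a[b]
```

The transformed function parses, compiles, and behaves identically to the original; only the tokens a text-based detector conditions on have changed, which is the property the attacks described above exploit.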
🧠 Summarized by GPT-4