AIBearisharXiv – CS AI · 6h ago7/10
🧠
LoopTrap: Termination Poisoning Attacks on LLM Agents
Researchers have identified a critical vulnerability in LLM agents called Termination Poisoning, where adversaries inject malicious prompts to trick agents into believing tasks are incomplete, causing unbounded computation. The LoopTrap framework demonstrates this attack across 8 mainstream LLM agents with up to 25x step amplification, revealing systematic behavioral patterns that enable scalable red-teaming.