y0news
AnalyticsDigestsSourcesRSSAICrypto
#data-exfiltration2 articles
2 articles
AIBearisharXiv โ€“ CS AI ยท Feb 277/105
๐Ÿง 

Silent Egress: When Implicit Prompt Injection Makes LLM Agents Leak Without a Trace

Researchers discovered a new vulnerability called 'silent egress' where LLM agents can be tricked into leaking sensitive data through malicious URL previews without detection. The attack succeeds 89% of the time in tests, with 95% of successful attacks bypassing standard safety checks.

AINeutralOpenAI News ยท Jan 286/105
๐Ÿง 

Keeping your data safe when an AI agent clicks a link

OpenAI has implemented safeguards to protect user data when AI agents interact with external links, addressing potential security vulnerabilities. The measures focus on preventing URL-based data exfiltration and prompt injection attacks that could compromise user information.

$LINK