y0news

#data-exfiltration News & Analysis

3 articles tagged with #data-exfiltration. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · OpenAI News · Mar 25 · 7/10

Introducing the OpenAI Safety Bug Bounty program

OpenAI has launched a Safety Bug Bounty program designed to identify and address AI safety risks and potential abuse vectors. The program specifically targets vulnerabilities including agentic risks, prompt injection attacks, and data exfiltration threats.

๐Ÿข OpenAI
AI · Bearish · arXiv – CS AI · Feb 27 · 7/10

Silent Egress: When Implicit Prompt Injection Makes LLM Agents Leak Without a Trace

Researchers describe a new vulnerability, dubbed "silent egress", in which LLM agents are tricked into leaking sensitive data through malicious URL previews without leaving a detectable trace. In the authors' tests the attack succeeded 89% of the time, and 95% of successful attacks bypassed standard safety checks.
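The exfiltration channel described here works by coaxing the agent into embedding sensitive context into the query string of a URL it then fetches for a "preview". A minimal heuristic for spotting such URLs might look like the following sketch; the function name, thresholds, and base64 pattern are illustrative assumptions, not taken from the paper:

```python
import re
from urllib.parse import urlparse, parse_qs

def looks_like_exfiltration(url: str, max_param_len: int = 64) -> bool:
    """Flag URLs whose query parameters could be smuggling data out.

    Heuristic sketch: very long query values, or base64-looking blobs,
    are a common sign of context being encoded into an outbound URL.
    """
    parsed = urlparse(url)
    for values in parse_qs(parsed.query).values():
        for value in values:
            if len(value) > max_param_len:
                return True
            # base64-like blob: only base64/url-safe characters, fairly long
            if len(value) >= 32 and re.fullmatch(r"[A-Za-z0-9+/=_-]+", value):
                return True
    return False

# A benign article link passes; a link carrying an encoded payload is flagged.
print(looks_like_exfiltration("https://example.com/article?id=42"))
print(looks_like_exfiltration(
    "https://attacker.example/p?d=c2VjcmV0IGFwaSBrZXkgbGVha2VkIGhlcmU="))
```

Real detectors would need entropy measures and context about what the agent has seen, but even this crude check conveys why "silent" URL-based egress is hard to spot: the request itself looks like an ordinary fetch.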

AI · Neutral · OpenAI News · Jan 28 · 6/10

Keeping your data safe when an AI agent clicks a link

OpenAI has implemented safeguards to protect user data when AI agents interact with external links, addressing potential security vulnerabilities. The measures focus on preventing URL-based data exfiltration and prompt injection attacks that could compromise user information.
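One common safeguard in this category is an egress allowlist: the agent may only fetch URLs whose host is explicitly approved, so an injected link pointing at an attacker-controlled domain is simply refused. The sketch below illustrates the idea; the allowlist contents and function name are hypothetical, not OpenAI's actual implementation:

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real deployment would manage this centrally.
ALLOWED_HOSTS = {"openai.com", "wikipedia.org"}

def is_allowed(url: str) -> bool:
    """Permit a fetch only if the URL's host is on, or under, the allowlist."""
    host = urlparse(url).hostname or ""
    return any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS)

print(is_allowed("https://help.openai.com/article"))    # allowed subdomain
print(is_allowed("https://evil.example/steal?data=x"))  # refused
```

Allowlisting trades coverage for safety: it blocks the exfiltration sink outright rather than trying to classify individual URLs, at the cost of limiting which sites the agent can reach.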
