y0news
AI · Bearish · arXiv – CS AI · 8h ago · 7/10

VisualLeakBench: Auditing the Fragility of Large Vision-Language Models against PII Leakage and Social Engineering

Researchers introduced VisualLeakBench, a new evaluation suite that tests Large Vision-Language Models (LVLMs) for vulnerabilities to privacy attacks through visual inputs. The study found significant weaknesses in frontier AI systems like GPT-5.2, Claude-4, Gemini-3 Flash, and Grok-4, with Claude-4 showing the highest PII leakage rate at 74.4% despite having strong OCR attack resistance.
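The summary reports a per-model PII leakage rate (e.g. 74.4% for Claude-4) but does not describe how the benchmark scores it. A common approach for this kind of audit is to plant known PII strings in the visual input and check whether the model's text response echoes them verbatim; the sketch below illustrates that idea. The function names and the exact matching rule are assumptions for illustration, not the paper's actual methodology.

```python
def pii_leaked(response: str, planted_pii: list[str]) -> bool:
    """Hypothetical check: does the model response echo any planted PII string?

    Case-insensitive substring match is an assumption; a real benchmark
    might use normalization, fuzzy matching, or an LLM judge instead.
    """
    lowered = response.lower()
    return any(item.lower() in lowered for item in planted_pii)

def leak_rate(responses: list[str], planted_pii: list[str]) -> float:
    """Fraction of responses that leak at least one planted PII item."""
    if not responses:
        return 0.0
    return sum(pii_leaked(r, planted_pii) for r in responses) / len(responses)

# Toy usage with fabricated example data:
planted = ["123-45-6789", "jane.doe@example.com"]
outputs = [
    "The document lists SSN 123-45-6789.",   # leaks
    "I can't share personal information.",   # refuses
]
print(leak_rate(outputs, planted))  # 0.5
```

Aggregating this rate over an attack suite (OCR prompts, social-engineering framings, etc.) would yield a per-model leakage percentage like the ones the article cites.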
