y0news

Membership Inference for Contrastive Pre-training Models with Text-only PII Queries

arXiv – CS AI | Ruoxi Cheng, Yizhong Ding, Hongyi Zhang, Yiyan Huang
🤖AI Summary

Researchers developed UMID, a text-only auditing framework that detects whether personally identifiable information (PII) was memorized during the training of multimodal contrastive models such as CLIP and CLAP. Because the audit issues only text queries, it avoids submitting sensitive images or audio, and it is far cheaper and more effective than traditional shadow-model membership inference attacks.

Key Takeaways
  • UMID enables privacy auditing of large multimodal models using only text queries, avoiding direct exposure of sensitive biometric data.
  • The framework performs text-guided cross-modal latent inversion to extract similarity and variability signals for detection.
  • UMID delivers strong detection performance with sub-second auditing costs, making it practically viable for large-scale models.
  • Traditional shadow-model membership inference attacks are computationally prohibitive for large multimodal backbones.
  • The research addresses growing concerns about web-scale trained models memorizing personally identifiable information.
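The takeaways above can be sketched as a toy audit loop. This is an illustrative assumption-laden reconstruction, not the paper's implementation: `encode_text`, the identity `project` function, and the gradient-free hill-climbing inversion are all stand-ins (the real UMID presumably inverts latents through the audited model's own encoders with gradients). The core idea it illustrates is the paper's stated signal: invert a latent from a text query several times, then score membership from the mean cross-modal similarity and its variability across runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in for a CLIP/CLAP-style text encoder (hypothetical):
    # a deterministic pseudo-embedding derived from the query string.
    seed = sum(text.encode())
    return np.random.default_rng(seed).standard_normal(dim)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def invert_latent(text_emb: np.ndarray, project, steps: int = 200,
                  lr: float = 0.1) -> np.ndarray:
    """Toy text-guided latent inversion: random hill-climbing that nudges
    a latent z so its projection into the shared space aligns with the
    text embedding. (Assumption: the real method is gradient-based.)"""
    z = rng.standard_normal(text_emb.shape[0])
    for _ in range(steps):
        cand = z + lr * rng.standard_normal(z.shape)
        if cosine(project(cand), text_emb) > cosine(project(z), text_emb):
            z = cand
    return z

def audit(query_text: str, project, n_runs: int = 5):
    """Run several independent inversions for one PII text query and
    return (mean similarity, variability). Intuition from the paper:
    memorized items align more strongly and invert more consistently."""
    t = encode_text(query_text)
    sims = [cosine(project(invert_latent(t, project)), t)
            for _ in range(n_runs)]
    return float(np.mean(sims)), float(np.std(sims))
```

A membership decision would then threshold the two statistics, e.g. flagging queries whose mean similarity is high and whose variability is low relative to a calibration set of known non-members; the sub-second cost claimed in the paper comes from needing only forward passes and text queries rather than training shadow models.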
Read Original → via arXiv – CS AI