y0news

#membership-inference News & Analysis

5 articles tagged with #membership-inference. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · 5d ago · 7/10

Noise Aggregation Analysis Driven by Small-Noise Injection: Efficient Membership Inference for Diffusion Models

Researchers have developed a novel membership inference attack against diffusion models that uses noise aggregation analysis and small-noise injection to determine whether specific data samples were included in training datasets. The method significantly reduces computational costs while improving accuracy compared to existing approaches, highlighting emerging privacy vulnerabilities in widely-deployed generative AI systems like Stable Diffusion.

🧠 Stable Diffusion
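The small-noise-injection idea can be sketched as a reconstruction-consistency check: perturb a candidate sample several times, ask the model to denoise each perturbation, and aggregate the errors. This is an illustrative sketch only; the `denoise_fn` interface, the noise scale, and the aggregation statistic are assumptions, not the paper's exact method.

```python
import numpy as np

def membership_score(sample, denoise_fn, sigma=0.05, n_trials=16, rng=None):
    """Aggregate denoising error over several small-noise injections.

    `denoise_fn` is a stand-in for one denoising step of a diffusion model
    (hypothetical interface). A lower aggregated error suggests the sample
    is reconstructed more faithfully, i.e. was more likely seen in training.
    """
    rng = rng or np.random.default_rng(0)
    errors = []
    for _ in range(n_trials):
        # Inject a small amount of Gaussian noise, then measure how well
        # the model's denoised output matches the original sample.
        noisy = sample + sigma * rng.standard_normal(sample.shape)
        errors.append(np.mean((denoise_fn(noisy) - sample) ** 2))
    return float(np.mean(errors))  # aggregate across noise draws

# Toy stand-in: a "memorizing" denoiser that always snaps back to one
# training sample, so that sample scores lower than an unseen one.
member = np.ones(8)
denoiser = lambda x: member
assert membership_score(member, denoiser) < membership_score(np.zeros(8), denoiser)
```

In a real attack the per-trial errors would come from the diffusion model's predicted noise at a chosen timestep; the aggregation over many small perturbations is what keeps the query cost low.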
AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

Powerful Training-Free Membership Inference Against Autoregressive Language Models

Researchers have developed EZ-MIA, a training-free membership inference attack that dramatically improves detection of memorized data in fine-tuned language models by analyzing probability shifts at error positions. The method achieves 3.8x higher detection rates than previous approaches on GPT-2 and demonstrates that privacy risks in fine-tuned models are substantially greater than previously understood.

🧠 Llama
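The core signal described here, probability shifts at error positions, can be sketched as follows. Restrict attention to token positions where the base model mispredicts, and measure how much the fine-tuned model's probability of the true token rises there. The array interface and the averaging statistic are assumptions for illustration, not EZ-MIA's exact formulation.

```python
import numpy as np

def error_position_shift(base_logp, ft_logp, base_pred, true_tokens):
    """Average log-probability gain (fine-tuned minus base) of the true
    token, restricted to positions where the base model's top prediction
    was wrong. A larger shift suggests the fine-tuned model memorized
    the sequence (hypothetical statistic, sketching the idea only)."""
    err = np.asarray(base_pred) != np.asarray(true_tokens)  # error positions
    if not err.any():
        return 0.0
    return float(np.mean(np.asarray(ft_logp)[err] - np.asarray(base_logp)[err]))

# Toy example: only position 1 is an error position; the fine-tuned model
# gained 0.5 nats of log-probability there.
score = error_position_shift(
    base_logp=[-2.0, -3.0, -1.0],
    ft_logp=[-0.5, -2.5, -1.0],
    base_pred=[1, 2, 3],
    true_tokens=[1, 9, 3],
)  # → 0.5
```

Training-free here means the attacker only needs forward-pass probabilities from the base and fine-tuned models, with no shadow-model training.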
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Membership Inference for Contrastive Pre-training Models with Text-only PII Queries

Researchers developed UMID, a new text-only auditing framework to detect whether personally identifiable information was memorized during training of contrastive multimodal models like CLIP and CLAP. The method substantially improves the efficiency and effectiveness of membership inference while operating under a text-only query constraint, without access to the paired images or audio.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Learnability and Privacy Vulnerability are Entangled in a Few Critical Weights

Researchers discovered that privacy vulnerabilities in neural networks exist in only a small fraction of weights, but these same weights are critical for model performance. They developed a new approach that preserves privacy by rewinding and fine-tuning only these critical weights instead of retraining entire networks, maintaining utility while defending against membership inference attacks.
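The defense described above, rewinding only a critical subset of weights instead of retraining, can be sketched as a masked rewind. The per-weight vulnerability score and the selection fraction are placeholders; the paper's actual criterion for identifying critical weights is not specified in this summary.

```python
import numpy as np

def rewind_critical(w_init, w_final, scores, frac=0.01):
    """Rewind the top `frac` fraction of weights (by vulnerability score)
    to their earlier values, leaving the rest of the trained weights
    untouched. `scores` is any per-weight vulnerability score, e.g.
    gradient- or influence-based (hypothetical here)."""
    k = max(1, int(frac * w_final.size))
    idx = np.argsort(scores.ravel())[-k:]      # indices of most critical weights
    out = w_final.copy().ravel()
    out[idx] = w_init.ravel()[idx]             # rewind only those weights
    return out.reshape(w_final.shape)

# Toy example: with frac=0.2 on 10 weights, the 2 highest-scoring
# weights are rewound from 1.0 back to 0.0.
w = rewind_critical(np.zeros(10), np.ones(10), np.arange(10.0), frac=0.2)
```

In the full defense, a brief fine-tuning pass after the rewind would restore utility; the point is that touching only a few weights is far cheaper than retraining the whole network.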

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10

Window-based Membership Inference Attacks Against Fine-tuned Large Language Models

Researchers developed WBC (Window-Based Comparison), a new membership inference attack method that significantly outperforms existing approaches by analyzing localized patterns in Large Language Models rather than global signals. The technique achieves 2-3 times better detection rates and exposes critical privacy vulnerabilities in fine-tuned LLMs through sliding window analysis and binary voting mechanisms.
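The sliding-window-plus-voting mechanism can be sketched as follows: slide a fixed-size window over per-token losses from the target (fine-tuned) model and a reference model, let each window cast a binary vote on which model fits the text better, and take the majority. This is an illustrative sketch of window-based comparison under assumed inputs, not the paper's exact WBC procedure.

```python
import numpy as np

def window_vote(target_loss, ref_loss, window=4):
    """Slide a window over per-token losses; each window votes True if
    the target model fits the window better than the reference (a local
    sign of memorization), and the majority of votes decides membership."""
    t = np.asarray(target_loss)
    r = np.asarray(ref_loss)
    votes = []
    for i in range(len(t) - window + 1):
        # Binary vote per window: lower mean loss on the target model
        # suggests this span was memorized during fine-tuning.
        votes.append(t[i:i + window].mean() < r[i:i + window].mean())
    return bool(np.mean(votes) > 0.5)  # majority vote → predicted member

# Toy example: uniformly lower target loss → predicted member.
is_member = window_vote(np.full(10, 0.5), np.ones(10))
```

Localized windows can catch short memorized spans that a single sequence-level loss comparison would average away, which is the intuition behind the reported gain over global-signal attacks.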