17 articles tagged with #differential-privacy. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers propose RPSG, a novel method for generating synthetic data from private text using large language models while maintaining differential privacy protections. The approach uses private seeds and formal privacy mechanisms during candidate selection, achieving high-fidelity synthetic data with stronger privacy guarantees than existing methods.
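The summary does not detail RPSG's selection step, so the following is only a minimal sketch of one standard "formal privacy mechanism" for picking among LLM-generated candidates, the exponential mechanism; the candidates and scores are illustrative, not from the paper.

```python
import numpy as np

def exponential_mechanism(candidates, scores, epsilon, sensitivity=1.0):
    """Privately select one candidate, favoring higher scores.

    Standard exponential mechanism: P(c) is proportional to
    exp(epsilon * score(c) / (2 * sensitivity)).
    """
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()          # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    idx = np.random.choice(len(candidates), p=probs)
    return candidates[idx]

# Toy usage: pick among LLM-generated synthetic sentences scored for
# fidelity to the private corpus (scores here are hypothetical).
candidates = ["synthetic sentence A", "synthetic sentence B", "synthetic sentence C"]
scores = [0.9, 0.4, 0.7]
print(exponential_mechanism(candidates, scores, epsilon=1.0))
```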
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠 Researchers have developed a new framework that enables dataset condensation for non-differentiable clinical AI models like decision trees and Cox regression, using differential privacy to create synthetic medical datasets. This breakthrough allows healthcare institutions to share condensed synthetic data while preserving patient privacy and maintaining model utility across classification and survival prediction tasks.
AI · Bullish · Google DeepMind Blog · Oct 23 · 7/10
🧠 VaultGemma represents a breakthrough as the most capable large language model trained from scratch using differential privacy techniques. This development advances privacy-preserving AI by demonstrating that sophisticated models can be built while maintaining strong data protection guarantees.
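The post's training recipe is not restated here; DP training from scratch typically builds on DP-SGD (per-example gradient clipping plus Gaussian noise). A minimal sketch of that core update follows, with hypothetical shapes and hyperparameters rather than VaultGemma's actual configuration.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, rng=np.random.default_rng(0)):
    """One DP-SGD update: clip each example's gradient, average, add noise.

    per_example_grads: array of shape (batch, dim), one gradient per example.
    Noise std on the mean is noise_multiplier * clip_norm / batch,
    per the Gaussian mechanism.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors
    batch = clipped.shape[0]
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=clipped.shape[1])
    noisy_mean = clipped.mean(axis=0) + noise
    return -lr * noisy_mean  # parameter delta

# Toy usage with random "gradients" for an 8-parameter model.
g = np.random.default_rng(1).normal(size=(32, 8))
print(dp_sgd_step(g))
```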
AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠 Researchers introduce Privacy-Preserving Fine-Tuning (PPFT), a novel training approach that enables LLM services to process user queries without receiving raw text, addressing privacy vulnerabilities in current deployments. The method uses client-side encoders and noise-injected embeddings to maintain competitive model performance while eliminating exposure of sensitive personal, medical, or legal information.
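PPFT's exact encoder is not described in this summary; below is only a sketch of the general pattern it names, clipping and noising embeddings on the client before anything leaves the device. The embedding table, shapes, and noise scale are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical client-side encoder: token ids -> embeddings
# (a random stand-in for weights assumed to be shipped to the client).
EMBED = rng.normal(size=(1000, 64))  # vocab x dim

def encode_privately(token_ids, clip=1.0, sigma=0.5):
    """Embed locally, clip each vector's norm, add Gaussian noise.

    Only the noisy embeddings leave the client; raw text never does.
    """
    vecs = EMBED[np.asarray(token_ids)]
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    vecs = vecs * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    return vecs + rng.normal(0.0, sigma * clip, size=vecs.shape)

noisy = encode_privately([5, 42, 917])
print(noisy.shape)  # (3, 64) -- these, not the text, go to the server
```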
AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠 Researchers propose AdaProb, a machine unlearning method that enables trained AI models to efficiently forget specific data while preserving privacy and complying with regulations like GDPR. The approach uses adaptive probability distributions and demonstrates a 20% improvement in forgetting effectiveness with 50% less computational overhead than existing methods.
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers have developed DP-OPD (Differentially Private On-Policy Distillation), a new framework for training privacy-preserving language models that significantly improves performance over existing methods. The approach simplifies the training pipeline by eliminating the need for DP teacher training and offline synthetic text generation while maintaining strong privacy guarantees.
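As a rough sketch only: one way to read "on-policy distillation under DP" is that the student samples from its own policy, the teacher scores those samples, and the student's update is clipped and noised DP-SGD-style. The toy loop below illustrates that shape with stand-in models; it is not DP-OPD's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_logprob(sample):
    """Stand-in for a teacher trained on private data (hypothetical)."""
    return -float(np.sum(sample ** 2))

def dp_distill_step(student_params, n_samples=16, clip=1.0, sigma=1.1, lr=0.05):
    """On-policy distillation with a DP-SGD-style update (sketch).

    The student samples around its own parameters, scores each sample
    with the teacher, and moves toward high-scoring samples; per-sample
    contributions are clipped, averaged, and noised before the update.
    """
    grads = []
    for _ in range(n_samples):
        sample = student_params + rng.normal(size=student_params.shape)
        reward = teacher_logprob(sample)
        g = reward * (sample - student_params)   # REINFORCE-style estimate
        g *= min(1.0, clip / max(np.linalg.norm(g), 1e-12))
        grads.append(g)
    noise = rng.normal(0.0, sigma * clip / n_samples, size=student_params.shape)
    return student_params + lr * (np.mean(grads, axis=0) + noise)

params = np.zeros(4)
for _ in range(100):
    params = dp_distill_step(params)
print(params)
```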
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 This research survey examines Federated Learning (FL), a distributed machine learning approach that enables collaborative AI model training without centralizing sensitive data. The paper covers FL's technical challenges, privacy mechanisms, and applications across healthcare, finance, and IoT systems.
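The survey's core primitive, federated averaging, is easy to illustrate; here is a toy FedAvg loop on least-squares clients (the model and client data are illustrative, not from the paper).

```python
import numpy as np

def local_train(weights, data, lr=0.1, epochs=1):
    """Stand-in local update: least-squares gradient steps on client data."""
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ weights - y) / len(y)
        weights = weights - lr * grad
    return weights

def fedavg(global_w, client_datasets, rounds=10):
    """FedAvg: clients train locally; the server averages weights by data size."""
    for _ in range(rounds):
        locals_, sizes = [], []
        for data in client_datasets:
            locals_.append(local_train(global_w.copy(), data))
            sizes.append(len(data[1]))
        sizes = np.asarray(sizes, dtype=float)
        global_w = np.average(locals_, axis=0, weights=sizes / sizes.sum())
    return global_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=40)))
print(fedavg(np.zeros(2), clients))  # should land near [2, -1]
```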
AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠 Researchers introduce DP-MTV, the first framework enabling privacy-preserving multimodal in-context learning for vision-language models using differential privacy. The system allows processing hundreds of demonstrations while maintaining formal privacy guarantees, achieving competitive performance on benchmarks like VizWiz with only minimal accuracy loss.
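DP-MTV's mechanism is not spelled out in the summary; a common pattern for DP in-context learning is to split the demonstrations into disjoint subsets, answer the query once per subset, and release only a noisy-vote aggregate (PATE-style). A sketch of that aggregation step, with an illustrative noise scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_vote(per_subset_predictions, n_classes, epsilon=1.0):
    """Report-noisy-max vote over predictions from disjoint demo subsets.

    Each subset contributes one vote; changing one demonstration changes
    at most one subset's vote, so Laplace noise on the vote counts
    (scale 2/epsilon here, chosen conservatively) protects the demos.
    """
    counts = np.bincount(per_subset_predictions, minlength=n_classes).astype(float)
    counts += rng.laplace(0.0, 2.0 / epsilon, size=n_classes)
    return int(np.argmax(counts))

# Hypothetical: 20 disjoint demonstration subsets each produced a VQA answer id.
preds = rng.choice([0, 1, 2], size=20, p=[0.6, 0.3, 0.1])
print(noisy_vote(preds, n_classes=3, epsilon=1.0))
```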
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers identify fundamental conflicts between data privacy and data valuation methods used in AI training. The study shows that differential privacy requirements often destroy the fine-grained distinctions needed for effective data valuation, particularly for rare or influential examples.
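A toy numeric illustration of that tension (not from the paper): when DP noise is comparable to the gaps between per-example values, a rare influential example becomes indistinguishable from ordinary ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical leave-one-out values: one rare, influential example (0.020)
# among ordinary ones (~0.005); the gaps that matter are ~0.015.
true_values = np.concatenate([[0.020], rng.normal(0.005, 0.001, size=99)])

for sigma in (0.001, 0.05):  # DP noise far below vs. above the value gap
    noisy = true_values + rng.normal(0.0, sigma, size=true_values.shape)
    outranked_by = int((noisy > noisy[0]).sum())
    print(f"noise sigma={sigma}: examples now valued above the rare one: {outranked_by}")
```

With small noise the rare example keeps its top rank; once the noise scale exceeds the gap, dozens of ordinary examples overtake it and the valuation signal is gone.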
AI · Bullish · Google Research Blog · Dec 10 · 6/10
🧠 The article discusses a new differentially private framework designed to analyze AI chatbot usage patterns while protecting user privacy. This approach allows researchers to gain valuable insights into how users interact with AI systems without compromising individual data security.
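The framework's details are not restated here; its basic building block is presumably a noisy aggregate such as a Laplace-noised histogram over usage categories, sketched below with made-up categories and parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_histogram(category_per_user, n_categories, epsilon=1.0):
    """Release category counts with Laplace noise.

    With one contribution per user, adding or removing a user changes
    one count by 1 (L1 sensitivity 1), so Laplace(1/epsilon) noise
    gives epsilon-DP.
    """
    counts = np.bincount(category_per_user, minlength=n_categories).astype(float)
    return counts + rng.laplace(0.0, 1.0 / epsilon, size=n_categories)

# Hypothetical usage categories: 0=coding, 1=writing, 2=search.
usage = rng.choice(3, size=10_000, p=[0.5, 0.3, 0.2])
print(np.round(dp_histogram(usage, 3, epsilon=0.5)))
```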
AI · Bullish · Google Research Blog · Nov 12 · 6/10
🧠 Google researchers have released JAX-Privacy, a framework for implementing differentially private machine learning at scale. The framework enables privacy-preserving ML training while maintaining model performance through advanced algorithmic approaches.
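This is not JAX-Privacy's API; the sketch below only shows the primitive such frameworks build on, per-example gradients via jax.vmap over jax.grad, followed by the clipping step of DP-SGD.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    """Squared error for a single example."""
    return (jnp.dot(x, w) - y) ** 2

# Per-example gradients in one shot: grad over params, vmapped over the batch.
per_example_grads = jax.vmap(jax.grad(loss), in_axes=(None, 0, 0))

w = jnp.zeros(3)
X = jnp.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = jnp.array([1.0, 2.0])
g = per_example_grads(w, X, y)   # shape (2, 3): one gradient row per example

# Clip each row to norm <= 1 and average -- the DP-SGD aggregation step
# (noise addition would follow in a full pipeline).
norms = jnp.linalg.norm(g, axis=1, keepdims=True)
clipped = g * jnp.minimum(1.0, 1.0 / jnp.maximum(norms, 1e-12))
print(clipped.mean(axis=0))
```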
AI · Bullish · arXiv – CS AI · Mar 17 · 5/10
🧠 Researchers developed FedCVR, a privacy-preserving federated learning framework for cardiovascular risk prediction that enables secure collaboration across medical institutions. The system achieved an F1-score of 0.84 and AUC of 0.96 while maintaining differential privacy, demonstrating that server-side adaptive optimization can preserve clinical utility under strict privacy constraints.
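FedCVR's code is not shown here; "server-side adaptive optimization" usually refers to the FedOpt pattern, where the server treats the averaged client delta as a pseudo-gradient and applies an Adam-like update. A sketch of one such round with toy values:

```python
import numpy as np

def server_adam_update(w, avg_client_delta, m, v, t,
                       lr=0.1, b1=0.9, b2=0.99, eps=1e-3):
    """FedAdam-style server step: apply an Adam update to the global
    model using the averaged client delta as a pseudo-gradient.
    """
    g = -avg_client_delta                    # pseudo-gradient
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy round: three clients each report a model delta (hypothetical values).
w = np.zeros(4); m = np.zeros(4); v = np.zeros(4)
deltas = np.array([[0.2, -0.1, 0.0, 0.3],
                   [0.1, -0.2, 0.1, 0.2],
                   [0.3, -0.1, 0.0, 0.4]])
w, m, v = server_adam_update(w, deltas.mean(axis=0), m, v, t=1)
print(w)
```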
AI · Neutral · Google Research Blog · Aug 20 · 4/10
🧠 The article discusses differentially private partition selection, a technique for deciding which keys (partitions) of a user-contributed dataset can be safely released at scale. This advances privacy-preserving algorithms that protect individual contributions while still permitting aggregate data analysis and processing.
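A minimal sketch of the basic Laplace-threshold variant of partition selection (the blog's algorithm may differ): a key is released only if its noisy distinct-user count clears a threshold calibrated to the privacy parameters, so rare keys that could identify individuals stay hidden.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_partition_selection(user_counts, epsilon=1.0, delta=1e-6):
    """Release a key only if its noisy user count clears a threshold.

    Assumes each user contributes to at most one partition. With
    noisy_count = count + Lap(1/eps) and threshold
    1 + ln(1/(2*delta))/eps, a partition backed by a single user
    is released with probability at most delta.
    """
    threshold = 1.0 + np.log(1.0 / (2.0 * delta)) / epsilon
    released = {}
    for key, count in user_counts.items():
        noisy = count + rng.laplace(0.0, 1.0 / epsilon)
        if noisy > threshold:
            released[key] = noisy
    return released

# Hypothetical per-key distinct-user counts (e.g., search-query partitions).
counts = {"query_a": 500, "query_b": 40, "query_c": 2, "query_d": 1}
print(dp_partition_selection(counts))  # rare keys c and d are suppressed
```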
AI · Neutral · Google Research Blog · May 23 · 5/10
🧠 A research paper discusses methods for fine-tuning large language models (LLMs) while implementing user-level differential privacy protections. This algorithmic approach aims to preserve individual user privacy during the model training process while maintaining model performance.
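The distinguishing step in user-level DP is clipping each user's entire contribution, rather than each example's, before aggregation; a minimal sketch with hypothetical update vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def user_level_dp_mean(per_user_updates, clip=1.0, sigma=1.0):
    """Aggregate one update per user with user-level DP.

    Each user's *entire* contribution (e.g., the sum of their example
    gradients) is clipped to norm <= clip before averaging, so the
    guarantee covers all of a user's data, not single examples.
    """
    clipped = []
    for u in per_user_updates:
        u = np.asarray(u, dtype=float)
        u *= min(1.0, clip / max(np.linalg.norm(u), 1e-12))
        clipped.append(u)
    n = len(clipped)
    noise = rng.normal(0.0, sigma * clip / n, size=clipped[0].shape)
    return np.mean(clipped, axis=0) + noise

# Hypothetical: 8 users, each contributing an aggregated fine-tuning update.
updates = rng.normal(size=(8, 5))
print(user_level_dp_mean(updates))
```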
AI · Neutral · Google Research Blog · May 13 · 4/10
🧠 A research article focused on differential privacy techniques applied to trust graphs. It falls under algorithms and theory, offering a technical exploration of privacy-preserving methods in graph-based trust systems.
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠 Researchers introduce DP-RGMI, a framework that analyzes how differential privacy affects medical image analysis by decomposing performance degradation into encoder geometry and task-head utilization components. The study across 594,000 chest X-ray images reveals that differential privacy alters representation structure rather than uniformly collapsing features, providing insights for privacy model selection.