23 articles tagged with #privacy-preserving. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv → CS AI · 1d ago · 7/10
🧠 Researchers propose Safe-FedLLM, a defense framework addressing security vulnerabilities in federated large language model training by detecting malicious clients through analysis of LoRA update patterns. The lightweight classifier-based approach effectively mitigates attacks while maintaining model performance and training efficiency, representing a significant advancement in securing distributed LLM development.
AI · Neutral · arXiv → CS AI · Mar 17 · 7/10
🧠 Researchers propose group-conditional federated conformal prediction (GC-FCP), a new protocol that enables trustworthy AI uncertainty quantification across distributed clients while providing coverage guarantees for specific groups. The framework addresses challenges in federated learning for applications in healthcare, finance, and mobile sensing by creating compact weighted summaries that support efficient calibration.
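The group-conditional idea behind GC-FCP can be sketched with plain split conformal prediction run separately per group. This is a centralized toy under illustrative data, not the paper's federated protocol: the weighted-summary machinery and communication steps are omitted.

```python
import numpy as np

def group_conformal_quantiles(residuals, groups, alpha=0.1):
    """Per-group calibration: for each group g, take the
    ceil((n_g + 1)(1 - alpha)) / n_g empirical quantile of its residuals."""
    qhat = {}
    for g in np.unique(groups):
        r = np.sort(residuals[groups == g])
        n = len(r)
        k = min(int(np.ceil((n + 1) * (1 - alpha))), n) - 1
        qhat[g] = r[k]
    return qhat

def predict_interval(y_pred, group, qhat):
    """Interval with (approximately) 1 - alpha coverage for that group."""
    q = qhat[group]
    return (y_pred - q, y_pred + q)

rng = np.random.default_rng(0)
# two groups with different noise scales: a single global quantile
# would over-cover one group and under-cover the other
res = np.concatenate([np.abs(rng.normal(0, 1, 200)),
                      np.abs(rng.normal(0, 5, 200))])
grp = np.array([0] * 200 + [1] * 200)
qhat = group_conformal_quantiles(res, grp, alpha=0.1)
```

The noisier group gets a wider interval, which is exactly the per-group coverage guarantee a single pooled quantile cannot give.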
AI · Bullish · arXiv → CS AI · Mar 17 · 7/10
🧠 Researchers propose p²RAG, a new privacy-preserving Retrieval-Augmented Generation system that supports arbitrary top-k retrieval while being 3-300x faster than existing solutions. The system uses an interactive bisection method instead of sorting and employs secret sharing across two servers to protect user prompts and database content.
$RAG
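The two-server protection in p²RAG rests on additive secret sharing: each server holds one random-looking share, and only their sum reveals the value. A minimal sketch of that primitive (the modulus and integer encoding are illustrative; the paper's full retrieval protocol is much more involved):

```python
import secrets

MOD = 2**61 - 1  # illustrative prime modulus for additive sharing

def share(x):
    """Split integer x into two additive shares: x = (s0 + s1) mod MOD.
    Each share alone is uniformly random and reveals nothing about x."""
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

x, y = 42, 100
x0, x1 = share(x)
y0, y1 = share(y)
assert reconstruct(x0, x1) == x
# each server adds its shares locally; opening the result yields x + y
# without either server ever seeing x or y in the clear
assert reconstruct((x0 + y0) % MOD, (x1 + y1) % MOD) == (x + y) % MOD
```

Addition of shared values is free (done locally per server); it is the comparisons needed for top-k that make private retrieval expensive, which is what the bisection trick targets.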
AI × Crypto · Bullish · arXiv → CS AI · Mar 5 · 6/10
🤖 Researchers introduce ZKFL-PQ, a quantum-resistant cryptographic protocol for federated learning in medical AI that combines zero-knowledge proofs, lattice-based encryption, and homomorphic encryption. The protocol achieves 100% rejection of malicious updates while maintaining model accuracy, addressing vulnerabilities from gradient inversion attacks and future quantum threats.
AI · Neutral · arXiv → CS AI · Mar 4 · 7/10 · 5
🧠 Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.
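The simplest instantiation of inference-time collaboration is an output-level ensemble: each party runs its own model locally and shares only the predicted distribution. A hedged sketch (the models and combination rule here are stand-ins; the paper surveys a broader design space):

```python
import numpy as np

def federated_inference(models, x):
    """Each party evaluates its own model on x locally and shares only
    the output distribution; predictions are combined without exchanging
    training data or parameters (here: simple probability averaging)."""
    probs = np.stack([m(x) for m in models])
    return probs.mean(axis=0)

# two hypothetical, independently trained classifiers over 3 classes
model_a = lambda x: np.array([0.7, 0.2, 0.1])
model_b = lambda x: np.array([0.5, 0.4, 0.1])
combined = federated_inference([model_a, model_b], x=None)
```

Only output vectors cross party boundaries, which is why the system-level challenges (latency, trust in reported outputs) differ from training-time federated learning.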
AI · Bullish · arXiv → CS AI · Mar 3 · 7/10 · 4
🧠 BinaryShield is the first privacy-preserving threat intelligence system that enables secure sharing of attack fingerprints across compliance boundaries for LLM services. The system addresses the critical security gap where organizations cannot share prompt injection attack intelligence between services due to privacy regulations, achieving an F1-score of 0.94 while providing 38x faster similarity search than dense embeddings.
AI · Bullish · arXiv → CS AI · 2d ago · 6/10
🧠 WebLLM is an open-source JavaScript framework enabling high-performance large language model inference directly in web browsers without cloud servers. Using WebGPU and WebAssembly technologies, it achieves up to 80% of native GPU performance while preserving user privacy through on-device processing.
🏢 OpenAI
AI · Neutral · arXiv → CS AI · 6d ago · 6/10
🧠 Researchers introduce FedDAP, a federated learning framework that addresses domain shift challenges by constructing domain-specific global prototypes rather than single aggregated prototypes. The method aligns local features with prototypes from the same domain while encouraging separation from different domains, improving model generalization across heterogeneous client data.
AI · Bullish · arXiv → CS AI · Apr 7 · 6/10
🧠 Researchers have developed DP-OPD (Differentially Private On-Policy Distillation), a new framework for training privacy-preserving language models that significantly improves performance over existing methods. The approach simplifies the training pipeline by eliminating the need for DP teacher training and offline synthetic text generation while maintaining strong privacy guarantees.
🏢 Perplexity
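The privacy primitive underneath DP language-model training is the standard DP-SGD step: clip each per-example gradient, average, then add calibrated Gaussian noise. A minimal sketch of that generic step (not DP-OPD's distillation pipeline; clip norm and noise multiplier are illustrative):

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """DP-SGD-style aggregation: clip each per-example gradient to
    clip_norm, average, then add Gaussian noise scaled to the clip bound
    so no single example's contribution can dominate the update."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean.shape)
    return mean + noise
```

Clipping bounds each example's sensitivity; the Gaussian noise then buys the formal (epsilon, delta) guarantee via standard DP accounting.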
AI · Bullish · arXiv → CS AI · Mar 26 · 6/10
🧠 Researchers developed PLACID, a privacy-preserving system using small on-device AI models (2B-10B parameters) for clinical acronym disambiguation in healthcare settings. The cascaded approach combines general-purpose models for detection with domain-specific biomedical models, achieving 81% expansion accuracy while keeping sensitive health data local.
AI · Bullish · arXiv → CS AI · Mar 9 · 6/10
🧠 This research survey examines Federated Learning (FL), a distributed machine learning approach that enables collaborative AI model training without centralizing sensitive data. The paper covers FL's technical challenges, privacy mechanisms, and applications across healthcare, finance, and IoT systems.
AI × Crypto · Bullish · arXiv → CS AI · Mar 3 · 7/10 · 9
🤖 Researchers have developed the Agent Economic Sovereignty Protocol (AESP), a new framework that allows AI agents to conduct autonomous financial transactions at machine speed while maintaining human control and governance boundaries. The protocol uses five key mechanisms — policy engines, human oversight, dual-signed commitments, privacy preservation, and cryptographic substrates — to ensure agents remain economically capable but never fully sovereign.
AI · Neutral · arXiv → CS AI · Mar 3 · 6/10 · 7
🧠 Researchers identify fundamental conflicts between data privacy and data valuation methods used in AI training. The study shows that differential privacy requirements often destroy the fine-grained distinctions needed for effective data valuation, particularly for rare or influential examples.
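A toy numeric illustration of that tension (scores and scales are hypothetical, not from the paper): the sensitivity of a valuation score is driven by exactly the influential example one hopes to detect, so Laplace noise calibrated to protect it is as large as the signal itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical per-example value scores: one rare, highly influential point
scores = np.full(1000, 0.01)
scores[0] = 0.5  # the rare example clearly stands out before noising

# the rare example sets the sensitivity, so Laplace noise at a moderate
# privacy level (epsilon = 1) has scale equal to the gap it should reveal
epsilon, sensitivity = 1.0, 0.5
noisy = scores + rng.laplace(0.0, sensitivity / epsilon, size=scores.shape)
```

After noising, the rare example is no longer reliably top-ranked: the fine-grained distinction the valuation needed is the very thing the privacy mechanism must blur.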
AI · Bullish · arXiv → CS AI · Mar 3 · 7/10 · 6
🧠 Researchers have developed AloePri, the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input/output data while maintaining 96.5-100% accuracy and resisting state-of-the-art attacks, successfully tested on a 671B parameter model.
AI · Bullish · arXiv → CS AI · Mar 2 · 7/10 · 16
🧠 Researchers have developed MPU, a privacy-preserving framework that enables machine unlearning for large language models without requiring servers to share parameters or clients to share data. The framework uses perturbed model copies and harmonic denoising to achieve comparable performance to non-private methods, with most algorithms showing less than 1% performance degradation.
AI × Crypto · Bullish · Hugging Face Blog · Nov 17 · 6/10 · 7
🤖 The article discusses techniques for performing sentiment analysis on encrypted data using homomorphic encryption. This approach allows analysis of sensitive data while maintaining privacy, potentially enabling new applications in finance and other sectors requiring data confidentiality.
AI · Bullish · arXiv → CS AI · Mar 17 · 5/10
🧠 Researchers developed FedCVR, a privacy-preserving federated learning framework for cardiovascular risk prediction that enables secure collaboration across medical institutions. The system achieved an F1-score of 0.84 and AUC of 0.96 while maintaining differential privacy, demonstrating that server-side adaptive optimization can preserve clinical utility under strict privacy constraints.
AI · Neutral · arXiv → CS AI · Mar 17 · 5/10
🧠 Researchers developed a privacy-preserving method using SHAP entropy regularization to protect sensitive user data in explainable AI systems for smart home IoT applications. The approach reduces privacy leakage while maintaining model accuracy and explanation quality.
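One plausible reading of a SHAP entropy regularizer — the summary does not give the exact formulation, so this sketch is an assumption — is a loss term equal to the negative entropy of the normalized attribution mass, so that training favors diffuse explanations over ones dominated by a single sensitive feature:

```python
import numpy as np

def shap_entropy_penalty(shap_values, eps=1e-12):
    """Negative entropy of the normalized absolute attributions.
    Adding this to the training loss rewards diffuse explanations,
    so no single sensitive feature dominates what an explanation leaks.
    (Hypothetical formulation; the paper's exact regularizer may differ.)"""
    p = np.abs(np.asarray(shap_values, dtype=float))
    p = p / (p.sum() + eps)
    return float((p * np.log(p + eps)).sum())  # == -entropy

concentrated = shap_entropy_penalty([1.0, 0.0, 0.0, 0.0])   # penalized
diffuse = shap_entropy_penalty([0.25, 0.25, 0.25, 0.25])    # preferred
```

A fully concentrated attribution gets the maximum penalty (zero entropy), while a uniform one gets the minimum, capturing the leakage-vs-explanation-quality trade-off the paper targets.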
AI · Neutral · arXiv → CS AI · Mar 4 · 4/10 · 3
🧠 Researchers propose a new Personalized Federated Learning approach that automatically learns optimal collaboration weights between agents without prior knowledge of data heterogeneity. The method uses kernel mean embedding estimation to capture statistical relationships between agents and includes a practical implementation for communication-constrained federated settings.
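Kernel mean embeddings let you compare client data distributions via MMD (maximum mean discrepancy). A hedged sketch of deriving collaboration weights from pairwise MMD — the softmax weighting here is an illustrative choice, not necessarily the paper's rule, and a real federated version would exchange compact embedding summaries rather than raw data:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Squared MMD between the empirical kernel mean embeddings of x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

def collaboration_weights(datasets, gamma=1.0):
    """Weight each peer by distributional similarity:
    softmax over negative squared MMD to every other agent."""
    n = len(datasets)
    W = np.zeros((n, n))
    for i in range(n):
        sims = np.array([-mmd2(datasets[i], datasets[j], gamma)
                         for j in range(n)])
        e = np.exp(sims - sims.max())
        W[i] = e / e.sum()
    return W

rng = np.random.default_rng(0)
a = rng.normal(0, 1, (50, 2))
b = rng.normal(0, 1, (50, 2))   # same distribution as a
c = rng.normal(5, 1, (50, 2))   # shifted distribution
W = collaboration_weights([a, b, c])
```

Agents with matching distributions end up weighting each other heavily, while the shifted client is down-weighted, with no prior knowledge of the heterogeneity required.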
AI · Neutral · Google Research Blog · Oct 30 · 5/10 · 7
🧠 The article discusses developments in creating privacy-preserving methods for analyzing AI system usage. This represents ongoing efforts to balance transparency needs with privacy protection in AI deployment and monitoring.
AI · Neutral · Google Research Blog · Aug 20 · 4/10 · 8
🧠 The article discusses differentially private partition selection, a technique for securing private data at scale. This represents an advancement in privacy-preserving algorithms that can protect sensitive information while still allowing for data analysis and processing.
AI · Bullish · arXiv → CS AI · Mar 2 · 4/10 · 6
🧠 Researchers have developed a new framework for privacy-preserving feature selection that uses permutation-invariant representation learning and federated learning techniques. The approach addresses data imbalance and privacy constraints in distributed scenarios while improving computational efficiency and downstream task performance.
AI · Neutral · arXiv → CS AI · Mar 2 · 4/10 · 5
🧠 Researchers introduce FedVG, a new federated learning framework that uses gradient-guided aggregation and global validation sets to improve model performance in distributed training environments. The approach addresses client drift issues in heterogeneous data settings and can be integrated with existing federated learning algorithms.
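The gradient-guided idea can be sketched as weighting each client update by how well it agrees with a gradient measured on a small global validation set. This is an illustrative reading of the summary (cosine-similarity weighting is an assumption, not necessarily FedVG's exact rule):

```python
import numpy as np

def gradient_guided_aggregate(client_updates, val_grad):
    """Weight each client update by the non-negative cosine similarity of
    its direction with the validation-set gradient, down-weighting clients
    that have drifted away from the global objective."""
    weights = []
    for u in client_updates:
        cos = u @ val_grad / (np.linalg.norm(u) * np.linalg.norm(val_grad))
        weights.append(max(cos, 0.0))
    w = np.array(weights)
    if w.sum() == 0:
        w = np.ones_like(w)   # fall back to uniform if all disagree
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, client_updates))

val_grad = np.array([1.0, 0.0])
updates = [np.array([1.0, 0.0]),    # aligned client
           np.array([-1.0, 0.0])]   # drifted client, pointing the wrong way
agg = gradient_guided_aggregate(updates, val_grad)
```

The drifted client's update is zeroed out rather than averaged in, which is the basic mechanism by which validation guidance counters client drift in heterogeneous settings.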