y0news
AnalyticsDigestsSourcesTopicsRSSAICrypto

#privacy-preserving News & Analysis

23 articles tagged with #privacy-preserving. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

23 articles
AIBullisharXiv – CS AI Β· 1d ago7/10
🧠

Safe-FedLLM: Delving into the Safety of Federated Large Language Models

Researchers propose Safe-FedLLM, a defense framework addressing security vulnerabilities in federated large language model training by detecting malicious clients through analysis of LoRA update patterns. The lightweight classifier-based approach effectively mitigates attacks while maintaining model performance and training efficiency, representing a significant advancement in securing distributed LLM development.

AINeutralarXiv – CS AI Β· Mar 177/10
🧠

Efficient Federated Conformal Prediction with Group-Conditional Guarantee

Researchers propose group-conditional federated conformal prediction (GC-FCP), a new protocol that enables trustworthy AI uncertainty quantification across distributed clients while providing coverage guarantees for specific groups. The framework addresses challenges in federated learning for applications in healthcare, finance, and mobile sensing by creating compact weighted summaries that support efficient calibration.

AIBullisharXiv – CS AI Β· Mar 177/10
🧠

$p^2$RAG: Privacy-Preserving RAG Service Supporting Arbitrary Top-$k$ Retrieval

Researchers propose pΒ²RAG, a new privacy-preserving Retrieval-Augmented Generation system that supports arbitrary top-k retrieval while being 3-300x faster than existing solutions. The system uses an interactive bisection method instead of sorting and employs secret sharing across two servers to protect user prompts and database content.

$RAG
AI Γ— CryptoBullisharXiv – CS AI Β· Mar 56/10
πŸ€–

Zero-Knowledge Federated Learning with Lattice-Based Hybrid Encryption for Quantum-Resilient Medical AI

Researchers introduce ZKFL-PQ, a quantum-resistant cryptographic protocol for federated learning in medical AI that combines zero-knowledge proofs, lattice-based encryption, and homomorphic encryption. The protocol achieves 100% rejection of malicious updates while maintaining model accuracy, addressing vulnerabilities from gradient inversion attacks and future quantum threats.

AINeutralarXiv – CS AI Β· Mar 47/105
🧠

Federated Inference: Toward Privacy-Preserving Collaborative and Incentivized Model Serving

Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.

AIBullisharXiv – CS AI Β· Mar 37/104
🧠

BinaryShield: Cross-Service Threat Intelligence in LLM Services using Privacy-Preserving Fingerprints

BinaryShield is the first privacy-preserving threat intelligence system that enables secure sharing of attack fingerprints across compliance boundaries for LLM services. The system addresses the critical security gap where organizations cannot share prompt injection attack intelligence between services due to privacy regulations, achieving an F1-score of 0.94 while providing 38x faster similarity search than dense embeddings.

AIBullisharXiv – CS AI Β· 2d ago6/10
🧠

WebLLM: A High-Performance In-Browser LLM Inference Engine

WebLLM is an open-source JavaScript framework enabling high-performance large language model inference directly in web browsers without cloud servers. Using WebGPU and WebAssembly technologies, it achieves up to 80% of native GPU performance while preserving user privacy through on-device processing.

🏒 OpenAI
AINeutralarXiv – CS AI Β· 6d ago6/10
🧠

FedDAP: Domain-Aware Prototype Learning for Federated Learning under Domain Shift

Researchers introduce FedDAP, a federated learning framework that addresses domain shift challenges by constructing domain-specific global prototypes rather than single aggregated prototypes. The method aligns local features with prototypes from the same domain while encouraging separation from different domains, improving model generalization across heterogeneous client data.

AIBullisharXiv – CS AI Β· Apr 76/10
🧠

DP-OPD: Differentially Private On-Policy Distillation for Language Models

Researchers have developed DP-OPD (Differentially Private On-Policy Distillation), a new framework for training privacy-preserving language models that significantly improves performance over existing methods. The approach simplifies the training pipeline by eliminating the need for DP teacher training and offline synthetic text generation while maintaining strong privacy guarantees.

🏒 Perplexity
AIBullisharXiv – CS AI Β· Mar 266/10
🧠

PLACID: Privacy-preserving Large language models for Acronym Clinical Inference and Disambiguation

Researchers developed PLACID, a privacy-preserving system using small on-device AI models (2B-10B parameters) for clinical acronym disambiguation in healthcare settings. The cascaded approach combines general-purpose models for detection with domain-specific biomedical models, achieving 81% expansion accuracy while keeping sensitive health data local.

AIBullisharXiv – CS AI Β· Mar 96/10
🧠

Federated Learning: A Survey on Privacy-Preserving Collaborative Intelligence

This research survey examines Federated Learning (FL), a distributed machine learning approach that enables collaborative AI model training without centralizing sensitive data. The paper covers FL's technical challenges, privacy mechanisms, and applications across healthcare, finance, and IoT systems.

AI Γ— CryptoBullisharXiv – CS AI Β· Mar 37/109
πŸ€–

AESP: A Human-Sovereign Economic Protocol for AI Agents with Privacy-Preserving Settlement

Researchers have developed the Agent Economic Sovereignty Protocol (AESP), a new framework that allows AI agents to conduct autonomous financial transactions at machine speed while maintaining human control and governance boundaries. The protocol uses five key mechanisms including policy engines, human oversight, dual-signed commitments, privacy preservation, and cryptographic substrates to ensure agents remain economically capable but never fully sovereign.

AINeutralarXiv – CS AI Β· Mar 36/107
🧠

Challenges in Enabling Private Data Valuation

Researchers identify fundamental conflicts between data privacy and data valuation methods used in AI training. The study shows that differential privacy requirements often destroy the fine-grained distinctions needed for effective data valuation, particularly for rare or influential examples.

AIBullisharXiv – CS AI Β· Mar 37/106
🧠

Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Researchers have developed AloePri, the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input/output data while maintaining 96.5-100% accuracy and resisting state-of-the-art attacks, successfully tested on a 671B parameter model.

AIBullisharXiv – CS AI Β· Mar 27/1016
🧠

MPU: Towards Secure and Privacy-Preserving Knowledge Unlearning for Large Language Models

Researchers have developed MPU, a privacy-preserving framework that enables machine unlearning for large language models without requiring servers to share parameters or clients to share data. The framework uses perturbed model copies and harmonic denoising to achieve comparable performance to non-private methods, with most algorithms showing less than 1% performance degradation.

AI Γ— CryptoBullishHugging Face Blog Β· Nov 176/107
πŸ€–

Sentiment Analysis on Encrypted Data with Homomorphic Encryption

The article discusses techniques for performing sentiment analysis on encrypted data using homomorphic encryption. This approach allows analysis of sensitive data while maintaining privacy, potentially enabling new applications in finance and other sectors requiring data confidentiality.

AIBullisharXiv – CS AI Β· Mar 175/10
🧠

A Robust Framework for Secure Cardiovascular Risk Prediction: An Architectural Case Study of Differentially Private Federated Learning

Researchers developed FedCVR, a privacy-preserving federated learning framework for cardiovascular risk prediction that enables secure collaboration across medical institutions. The system achieved an F1-score of 0.84 and AUC of 0.96 while maintaining differential privacy, demonstrating that server-side adaptive optimization can preserve clinical utility under strict privacy constraints.

AINeutralarXiv – CS AI Β· Mar 175/10
🧠

Privacy-Preserving Explainable AIoT Application via SHAP Entropy Regularization

Researchers developed a privacy-preserving method using SHAP entropy regularization to protect sensitive user data in explainable AI systems for smart home IoT applications. The approach reduces privacy leakage while maintaining model accuracy and explanation quality.

AINeutralarXiv – CS AI Β· Mar 44/103
🧠

Adaptive Personalized Federated Learning via Multi-task Averaging of Kernel Mean Embeddings

Researchers propose a new Personalized Federated Learning approach that automatically learns optimal collaboration weights between agents without prior knowledge of data heterogeneity. The method uses kernel mean embedding estimation to capture statistical relationships between agents and includes a practical implementation for communication-constrained federated settings.

AINeutralGoogle Research Blog Β· Oct 305/107
🧠

Toward provably private insights into AI use

The article discusses developments in creating privacy-preserving methods for analyzing AI system usage. This represents ongoing efforts to balance transparency needs with privacy protection in AI deployment and monitoring.

AINeutralGoogle Research Blog Β· Aug 204/108
🧠

Securing private data at scale with differentially private partition selection

The article discusses differentially private partition selection, a technique for securing private data at scale. This represents an advancement in privacy-preserving algorithms that can protect sensitive information while still allowing for data analysis and processing.

AINeutralarXiv – CS AI Β· Mar 24/105
🧠

FedVG: Gradient-Guided Aggregation for Enhanced Federated Learning

Researchers introduce FedVG, a new federated learning framework that uses gradient-guided aggregation and global validation sets to improve model performance in distributed training environments. The approach addresses client drift issues in heterogeneous data settings and can be integrated with existing federated learning algorithms.