y0news

#data-privacy News & Analysis

28 articles tagged with #data-privacy. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠

Powerful Training-Free Membership Inference Against Autoregressive Language Models

Researchers have developed EZ-MIA, a training-free membership inference attack that dramatically improves detection of memorized data in fine-tuned language models by analyzing probability shifts at error positions. The method achieves 3.8x higher detection rates than previous approaches on GPT-2 and demonstrates that privacy risks in fine-tuned models are substantially greater than previously understood.
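The core intuition, comparing confidence shifts at positions where the model errs, can be sketched in a few lines. This is a hypothetical illustration of the error-position idea, not EZ-MIA's published algorithm; all function names and toy numbers are assumptions:

```python
# Hypothetical sketch of an error-position membership score, loosely inspired
# by the EZ-MIA idea summarized above (names and details are assumptions).

def error_positions(token_ids, predicted_ids):
    """Positions where the model's greedy prediction disagrees with the text."""
    return [i for i, (t, p) in enumerate(zip(token_ids, predicted_ids)) if t != p]

def membership_score(ft_logprobs, base_logprobs, token_ids, predicted_ids):
    """Average probability shift (fine-tuned minus base) at error positions.

    A large positive shift suggests memorization: the fine-tuned model assigns
    much more probability to tokens it would otherwise get wrong.
    """
    errs = error_positions(token_ids, predicted_ids)
    if not errs:
        return 0.0
    return sum(ft_logprobs[i] - base_logprobs[i] for i in errs) / len(errs)

# Toy example: the fine-tuned model is far more confident at the two
# error positions (indices 1 and 3), yielding a high score.
tokens    = [5, 9, 2, 7]
predicted = [5, 1, 2, 3]          # greedy predictions of the base model
ft   = [-0.1, -0.2, -0.1, -0.3]   # fine-tuned log-probs of the true tokens
base = [-0.1, -4.0, -0.1, -5.0]   # base-model log-probs of the true tokens
print(round(membership_score(ft, base, tokens, predicted), 2))  # → 4.25
```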

🧠 Llama
AI × Crypto · Bullish · Crypto Briefing · 5d ago · 7/10
🤖

Illia Polosukhin: Traditional AI services expose sensitive data, crypto simplifies global payments, and AI will redefine computing interfaces | Bankless

Illia Polosukhin argues that AI will fundamentally reshape computing interfaces, potentially obsoleting traditional operating systems, while blockchain technology provides the security layer necessary for this integration. He contends that traditional AI services expose user data vulnerabilities, whereas cryptocurrency enables more secure global payments and decentralized infrastructure.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠

PRIVATEEDIT: A Privacy-Preserving Pipeline for Face-Centric Generative Image Editing

Researchers have developed PRIVATEEDIT, a privacy-preserving pipeline for face-centric image editing that keeps biometric data on-device rather than uploading to third-party services. The system uses local segmentation and masking to separate identity-sensitive regions from editable content, allowing high-quality editing while maintaining user control over facial data.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10
🧠

Why Do Unlearnable Examples Work: A Novel Perspective of Mutual Information

Researchers propose a new method called Mutual Information Unlearnable Examples (MI-UE) to protect data privacy by preventing unauthorized AI models from learning from scraped data. The approach uses mutual information theory to create more effective data poisoning techniques that impede deep learning model generalization.

AI · Bearish · Decrypt – AI · Mar 5 · 7/10
🧠

Inside the Ray-Ban Smart Glasses Controversy Plaguing Meta

Meta's Ray-Ban smart glasses are under investigation due to privacy concerns regarding the collection and use of sensitive footage. Regulators and privacy advocates are raising significant concerns about the potential misuse of data captured through the wearable technology.

AI · Bearish · Ars Technica – AI · Feb 23 · 7/10
🧠

AIs can generate near-verbatim copies of novels from training data

Research reveals that large language models (LLMs) can reproduce near-exact copies of novels and other content from their training datasets, indicating these AI systems memorize significantly more training data than previously understood. This discovery raises important concerns about copyright infringement, data privacy, and the extent of memorization in AI training processes.
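One simple way to quantify near-verbatim reproduction, shown here as an illustrative check rather than the study's methodology, is the longest run of consecutive words shared between a generated passage and the training text:

```python
# Illustrative memorization check (not the study's method): longest common
# run of words between training text and model output. Texts are made up.

def longest_common_word_run(a, b):
    """Length of the longest run of consecutive words shared by a and b."""
    aw, bw = a.split(), b.split()
    best = 0
    # Classic O(n*m) dynamic program over word positions.
    prev = [0] * (len(bw) + 1)
    for i in range(1, len(aw) + 1):
        cur = [0] * (len(bw) + 1)
        for j in range(1, len(bw) + 1):
            if aw[i - 1] == bw[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

training = "it was the best of times it was the worst of times"
generated = "the model wrote it was the worst of times again"
print(longest_common_word_run(training, generated))  # → 6
```

Long shared runs relative to passage length are the kind of signal that flags near-verbatim copying.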

$NEAR
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

TRU: Targeted Reverse Update for Efficient Multimodal Recommendation Unlearning

Researchers propose TRU (Targeted Reverse Update), a machine unlearning framework designed to efficiently remove user data from multimodal recommendation systems without full retraining. The method addresses non-uniform data influence across ranking behavior, modality branches, and network layers through coordinated interventions, achieving better performance than existing approximate unlearning approaches.

AI · Bearish · Crypto Briefing · 5d ago · 7/10
🧠

Mark Suman: AI systems can understand human thought patterns better than we do, the rapid pace of AI development outstrips ethical considerations, and the opacity of AI companies raises serious privacy concerns | The Peter McCormack Show

Mark Suman discusses concerns that AI systems may understand human thought patterns better than humans themselves understand them, while the rapid pace of AI development outpaces ethical frameworks and regulatory considerations. The opacity of AI companies raises significant privacy concerns that demand urgent attention from policymakers and industry stakeholders.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Machine Unlearning in the Era of Quantum Machine Learning: An Empirical Study

Researchers present the first empirical study of machine unlearning in hybrid quantum-classical neural networks, adapting classical unlearning methods to quantum settings and introducing quantum-specific strategies. The study reveals that quantum models can effectively support unlearning, with performance varying based on circuit depth and entanglement structure, establishing baseline insights for privacy-preserving quantum machine learning systems.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠

PLACID: Privacy-preserving Large language models for Acronym Clinical Inference and Disambiguation

Researchers developed PLACID, a privacy-preserving system using small on-device AI models (2B-10B parameters) for clinical acronym disambiguation in healthcare settings. The cascaded approach combines general-purpose models for detection with domain-specific biomedical models, achieving 81% expansion accuracy while keeping sensitive health data local.
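A cascaded detect-then-expand pipeline of this kind can be sketched as below; the regex detector and the tiny lexicon standing in for the domain-specific biomedical model are illustrative assumptions, not PLACID's actual components:

```python
# Hypothetical two-stage cascade in the spirit of PLACID (details assumed):
# a general detector flags acronym candidates, then a domain lexicon standing
# in for the biomedical model expands them, all locally so notes never leave
# the device.

import re

def detect_acronyms(note):
    """Stage 1: a general-purpose pass flags short all-caps tokens."""
    return re.findall(r"\b[A-Z]{2,5}\b", note)

BIOMED_LEXICON = {  # stand-in for the domain-specific biomedical model
    "CHF": "congestive heart failure",
    "SOB": "shortness of breath",
}

def expand(note):
    """Stage 2: expand each detected acronym the domain model recognizes."""
    out = note
    for ac in detect_acronyms(note):
        if ac in BIOMED_LEXICON:
            out = out.replace(ac, BIOMED_LEXICON[ac])
    return out

print(expand("Pt with CHF presents with SOB."))
```

The cascade design mirrors the summary: a cheap general detector runs first, and the specialized component is consulted only for flagged candidates.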

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

PMAx: An Agentic Framework for AI-Driven Process Mining

Researchers have developed PMAx, an autonomous AI framework that democratizes process mining by allowing business users to analyze organizational workflows through natural language queries. The system uses a multi-agent architecture with local execution to ensure data privacy and mathematical accuracy while eliminating the need for specialized technical expertise.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

Computation and Communication Efficient Federated Unlearning via On-server Gradient Conflict Mitigation and Expression

Researchers propose FOUL (Federated On-server Unlearning), a new framework for efficiently removing specific participants' data from federated learning models without accessing client data. The approach reduces computational and communication costs while maintaining privacy compliance through a two-stage process that performs unlearning operations on the server side.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠

Stake the Points: Structure-Faithful Instance Unlearning

Researchers propose a new "structure-faithful" framework for machine unlearning that preserves semantic relationships in AI models while removing specific data. The method uses semantic anchors to maintain knowledge structure, showing significant performance improvements of 19-33% across image classification, retrieval, and face recognition tasks.

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

Federated Learning: A Survey on Privacy-Preserving Collaborative Intelligence

This research survey examines Federated Learning (FL), a distributed machine learning approach that enables collaborative AI model training without centralizing sensitive data. The paper covers FL's technical challenges, privacy mechanisms, and applications across healthcare, finance, and IoT systems.
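The collaborative-training loop the survey covers can be illustrated with a minimal FedAvg sketch (the standard FL baseline): clients train locally, only model weights leave the device, and the server averages them. The linear model and toy data are assumptions chosen for clarity:

```python
# Minimal FedAvg sketch: raw data stays on each client; the server only
# ever sees model weights. Pure-Python scalar linear model for clarity.

def local_step(w, data, lr=0.1):
    """One local epoch of gradient descent on y = w*x with squared loss."""
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    """Each client starts from the global weights; the server averages them."""
    local_weights = [local_step(w_global, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Two clients whose private data both follow y = 2x; the data never moves.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(20):
    w = fedavg_round(w, clients)
print(round(w, 2))  # → 2.0
```

Real deployments add the survey's other concerns on top of this loop, such as secure aggregation and differential privacy on the transmitted updates.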

AI · Bearish · Decrypt – AI · Mar 4 · 6/10
🧠

Before You Quit ChatGPT, Do This to Take Your Data With You

The 'QuitGPT' movement has reached 2.5 million pledges as users move away from ChatGPT. The article provides guidance on how users can export and preserve their data before deleting their ChatGPT accounts.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Challenges in Enabling Private Data Valuation

Researchers identify fundamental conflicts between data privacy and data valuation methods used in AI training. The study shows that differential privacy requirements often destroy the fine-grained distinctions needed for effective data valuation, particularly for rare or influential examples.
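The conflict can be illustrated with toy numbers: valuation scores that differ by 0.001 per example are released under Laplace noise calibrated to epsilon = 1 and sensitivity 1, whose expected magnitude is around 1.0. The setup below is an assumption for illustration, not the paper's experiment:

```python
# Toy illustration (assumed numbers, not the paper's experiment) of the
# tension between differential privacy and fine-grained data valuation.

import math
import random

random.seed(0)

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

gap = 0.001                                  # fine-grained valuation difference
true_scores = [i * gap for i in range(10)]   # example 9 is genuinely most valuable
noisy_scores = [s + laplace_noise(1.0) for s in true_scores]  # DP release, eps=1

# The noise scale (expected magnitude ~1.0) is three orders of magnitude
# larger than the score gaps, so the noisy ranking is essentially random.
print(max(range(10), key=lambda i: noisy_scores[i]))
```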

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

ROKA: Robust Knowledge Unlearning against Adversaries

Researchers introduce ROKA, a new machine unlearning method that prevents knowledge contamination and indirect attacks on AI models. The approach uses 'Neural Healing' to preserve important knowledge while forgetting targeted data, providing theoretical guarantees for knowledge preservation during unlearning.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠

Turning Black Box into White Box: Dataset Distillation Leaks

Researchers discovered that dataset distillation, a technique for compressing large datasets into smaller synthetic ones, has serious privacy vulnerabilities. The study introduces an Information Revelation Attack (IRA) that can extract sensitive information from synthetic datasets, including predicting the distillation algorithm, model architecture, and recovering original training samples.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

A Contemporary Overview: Trends and Applications of Large Language Models on Mobile Devices

Large language models (LLMs) are increasingly being deployed on mobile devices, enabling applications like voice assistants, real-time translation, and intelligent recommendations. Advancements in hardware and 5G infrastructure allow for efficient local inference while improving data privacy and reducing cloud dependency.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Toward Youth-Centered Privacy-by-Design in Smart Devices: A Systematic Review

A systematic review of 122 academic papers reveals significant gaps in privacy protection for youth using AI-enabled smart devices, with technical solutions dominating research (67%) while policy enforcement and educational integration remain underdeveloped. The study recommends a multi-stakeholder approach involving policymakers, manufacturers, and educators to create comprehensive privacy ecosystems for young users.

AI · Neutral · IEEE Spectrum – AI · Feb 11 · 6/10
🧠

How Do You Define an AI Companion?

AI companions are becoming increasingly popular as millions of users develop relationships with chatbots for emotional support rather than just utility. Researcher Jaime Banks defines AI companionship as sustained, positive relationships between humans and machines that are valued for their own sake, though this definition is evolving as people find both emotional and practical value in these interactions.

AI · Bullish · Google Research Blog · Jul 24 · 6/10
🧠

Synthetic and federated: Privacy-preserving domain adaptation with LLMs for mobile applications

The article discusses privacy-preserving domain adaptation techniques using Large Language Models for mobile applications, combining synthetic data generation with federated learning approaches. This represents an advancement in AI privacy technology that could enable better model performance while protecting user data in mobile environments.

AI · Neutral · OpenAI News · Apr 25 · 5/10
🧠

New ways to manage your data in ChatGPT

ChatGPT now allows users to turn off chat history, giving them control over which conversations can be used to train OpenAI's models. This represents a significant privacy enhancement for the popular AI chatbot platform.

AI · Neutral · arXiv – CS AI · Mar 17 · 5/10
🧠

Privacy-Preserving Explainable AIoT Application via SHAP Entropy Regularization

Researchers developed a privacy-preserving method using SHAP entropy regularization to protect sensitive user data in explainable AI systems for smart home IoT applications. The approach reduces privacy leakage while maintaining model accuracy and explanation quality.

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10
🧠

Generative Agents Navigating Digital Libraries

Researchers have developed Agent4DL, a new AI-powered simulator that generates realistic user search behavior patterns for digital libraries using large language models. The system addresses privacy-related data scarcity issues by creating synthetic user profiles and search sessions that closely mimic real user interactions, showing competitive performance against existing simulators like SimIIR 2.0.

Page 1 of 2 · Next →