y0news

#data-protection News & Analysis

32 articles tagged with #data-protection. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

🤖 AI × Crypto · Bullish · arXiv – CS AI · 1d ago · 7/10

Hardening x402: PII-Safe Agentic Payments via Pre-Execution Metadata Filtering

Researchers have developed presidio-hardened-x402, an open-source middleware that filters personally identifiable information out of AI-agent payment requests made over the x402 protocol before the data reaches payment servers or centralized APIs. The tool detects PII with 97.2% precision at minimal latency, addressing a critical privacy gap in which payment metadata is currently transmitted without data processing agreements.
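
The paper's middleware is not reproduced in the summary, but the core pattern, scrubbing free-text payment metadata with Microsoft's Presidio before it leaves the agent, can be sketched in a few lines. In this minimal illustration, `send_payment_request` and the memo field are hypothetical stand-ins for the x402 request path:

```python
# Minimal sketch: scrub PII from payment metadata before it leaves the agent.
# Presidio's analyzer/anonymizer API is real; send_payment_request and the
# memo field are hypothetical stand-ins for the x402 request path.
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

def scrub_metadata(memo: str) -> str:
    """Detect PII spans in free-text payment metadata and mask them."""
    findings = analyzer.analyze(text=memo, language="en")
    return anonymizer.anonymize(text=memo, analyzer_results=findings).text

memo = "Refund for John Smith, reachable at john.smith@example.com"
safe_memo = scrub_metadata(memo)  # e.g. "Refund for <PERSON>, reachable at <EMAIL_ADDRESS>"
# send_payment_request(amount="5.00", currency="USDC", memo=safe_memo)  # hypothetical
```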

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Private Seeds, Public LLMs: Realistic and Privacy-Preserving Synthetic Data Generation

Researchers propose RPSG, a novel method for generating synthetic data from private text using large language models while maintaining differential privacy protections. The approach uses private seeds and formal privacy mechanisms during candidate selection, producing high-fidelity synthetic data with stronger privacy guarantees than existing methods.
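
The summary does not spell out RPSG's mechanism, but the classic tool for privately selecting among candidates is the exponential mechanism, sketched below with illustrative utilities and epsilon. Treat it as a generic stand-in, not the paper's algorithm:

```python
# Toy sketch of differentially private candidate selection via the exponential
# mechanism, one plausible "formal privacy mechanism" for choosing among
# LLM-generated candidates. Utilities and epsilon are illustrative.
import numpy as np

def exponential_mechanism(utilities: np.ndarray, epsilon: float,
                          sensitivity: float = 1.0) -> int:
    """Pick an index with probability proportional to exp(eps * u / (2 * sensitivity))."""
    scores = epsilon * utilities / (2.0 * sensitivity)
    scores -= scores.max()                        # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return int(np.random.choice(len(utilities), p=probs))

# utilities[i]: how well candidate i matches the private seed text (placeholders)
utilities = np.array([0.9, 0.4, 0.7, 0.2])
chosen = exponential_mechanism(utilities, epsilon=1.0)
```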

🧠 AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Opal: Private Memory for Personal AI

Researchers present Opal, a private memory system for personal AI that uses trusted hardware enclaves and oblivious RAM to protect user data while maintaining query accuracy. The system achieves a 13-percentage-point improvement in retrieval accuracy over semantic search, with 29x higher throughput and 15x lower cost than secure baselines.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

WARP: Weight Teleportation for Attack-Resilient Unlearning Protocols

Researchers introduce WARP, a new defense mechanism for machine unlearning protocols that protects against privacy attacks where adversaries can exploit differences between pre- and post-unlearning AI models. The technique reduces attack success rates by up to 92% while maintaining model accuracy on retained data.

🧠 AI · Bearish · arXiv – CS AI · Mar 3 · 7/10

Multi-PA: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models

Researchers introduce Multi-PA, a comprehensive benchmark for evaluating privacy risks in Large Vision-Language Models (LVLMs), covering 26 personal-privacy, 15 trade-secret, and 18 state-secret categories across 31,962 samples. Testing 21 open-source and 2 closed-source LVLMs revealed significant vulnerabilities, with most models posing a high risk of facilitating privacy breaches across categories.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs

Researchers propose Partial Model Collapse (PMC), a novel machine unlearning method for large language models that removes private information without directly training on sensitive data. The approach treats model collapse (the degradation that occurs when models are trained on their own outputs) as a feature, using it to deliberately forget targeted information while preserving general utility.

🧠 AI · Bearish · arXiv – CS AI · Mar 3 · 7/10

AudAgent: Automated Auditing of Privacy Policy Compliance in AI Agents

Researchers have developed AudAgent, an automated tool that monitors AI agents in real time to check that they comply with their stated privacy policies. The tool revealed that many agents built on major models such as Claude, Gemini, and DeepSeek fail to protect highly sensitive data such as Social Security numbers and violate their own privacy policies.
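
AudAgent's internals are not described beyond the summary, but the general shape of runtime policy auditing, checking each outbound agent message against declared rules before release, is easy to illustrate. The rule set and patterns below are examples, not the tool's actual checks:

```python
# Illustrative sketch of runtime policy auditing in the spirit of AudAgent:
# check each outbound agent message against the agent's declared privacy
# policy before it is released. Rules and regexes are examples only.
import re

POLICY_RULES = {
    "no_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN pattern
    "no_email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def audit_outbound(message: str) -> list[str]:
    """Return the list of policy rules the message would violate."""
    return [name for name, pattern in POLICY_RULES.items() if pattern.search(message)]

violations = audit_outbound("Customer SSN is 123-45-6789, please process.")
if violations:
    raise PermissionError(f"Blocked by privacy policy: {violations}")
```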

$LINK
🧠 AI · Bearish · arXiv – CS AI · Feb 27 · 7/10

Large-scale online deanonymization with LLMs

Researchers demonstrate that large language models can successfully deanonymize pseudonymous users across online platforms at scale, achieving up to 68% recall at 90% precision. The study shows LLMs can match users between platforms like Hacker News and LinkedIn, or across Reddit communities, using only unstructured text data.
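
As a toy illustration of the attack surface (not the paper's method, which uses LLMs rather than TF-IDF), cross-platform linking can be framed as nearest-neighbor matching on writing style:

```python
# Crude stand-in for the paper's LLM-based matching: link accounts across
# platforms by TF-IDF cosine similarity of their posts. All usernames and
# post texts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

hn_posts = {"hn_user_a": "Rust's borrow checker saves me from myself, again.",
            "hn_user_b": "Our seed round closed; hiring ML infra folks."}
li_posts = {"li_user_1": "Excited to announce our seed round! Hiring ML infrastructure engineers.",
            "li_user_2": "Memory safety matters: why we rewrote our service in Rust."}

vec = TfidfVectorizer().fit(list(hn_posts.values()) + list(li_posts.values()))
sims = cosine_similarity(vec.transform(hn_posts.values()),
                         vec.transform(li_posts.values()))

for i, hn in enumerate(hn_posts):
    j = sims[i].argmax()
    print(hn, "->", list(li_posts)[j], f"(similarity {sims[i, j]:.2f})")
```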

$NEAR
🧠 AI · Neutral · OpenAI News · Nov 12 · 7/10

Fighting the New York Times’ invasion of user privacy

OpenAI is resisting the New York Times' request for access to 20 million private ChatGPT conversations, while simultaneously implementing enhanced security and privacy protections for user data. This legal dispute highlights growing tensions over data privacy and corporate access to AI conversation logs.

🧠 AI · Bullish · Google DeepMind Blog · Oct 23 · 7/10

VaultGemma: The world's most capable differentially private LLM

VaultGemma is presented as the most capable large language model yet trained from scratch with differential privacy. The release advances privacy-preserving AI by demonstrating that sophisticated models can be built while maintaining strong, formally grounded data-protection guarantees.
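
The standard recipe behind differentially private training is DP-SGD: clip each example's gradient to a fixed norm, then add Gaussian noise before the update. A pure-numpy sketch with illustrative constants, not VaultGemma's actual pipeline:

```python
# Core of DP-SGD: per-example gradient clipping plus Gaussian noise.
# clip_norm, noise_multiplier, and the gradients are illustrative values.
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray, lr: float,
                clip_norm: float = 1.0, noise_multiplier: float = 1.1) -> np.ndarray:
    """Return a privatized parameter update from a batch of per-example gradients."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)  # clip to clip_norm
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, clipped.shape[1])
    return -lr * (clipped.sum(axis=0) + noise) / len(per_example_grads)

grads = np.random.randn(32, 10)   # batch of 32 per-example gradients
update = dp_sgd_step(grads, lr=0.1)
```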

🤖 AI × Crypto · Bullish · Hugging Face Blog · Aug 2 · 7/10

Towards Encrypted Large Language Models with FHE

The article discusses the development of encrypted large language models using Fully Homomorphic Encryption (FHE) technology. This approach would allow AI models to process data while keeping it encrypted, potentially addressing privacy concerns in AI applications.
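
Full FHE inference for LLMs remains research-grade, but the programming model can be tried at small scale with Zama's Concrete ML, which compiles scikit-learn-style models to run on encrypted inputs. A sketch assuming a recent concrete-ml release (older versions spell the FHE execution flag differently):

```python
# Small-scale illustration of the FHE programming model: train in the clear,
# compile to an FHE circuit, then run inference on encrypted inputs.
# Assumes a recent concrete-ml release.
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = LogisticRegression()
model.fit(X, y)

model.compile(X)                              # build the FHE circuit from sample data
y_enc = model.predict(X[:5], fhe="execute")   # inputs stay encrypted end to end
print(y_enc)
```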

🧠 AI · Bullish · AI News · 7h ago · 6/10

Commvault launches a ‘Ctrl-Z’ for cloud AI workloads

Commvault has launched AI Protect, a governance solution that provides rollback capabilities for autonomous AI agents operating in cloud environments. The platform addresses critical risks posed by AI systems that can independently delete files, access databases, modify infrastructure, and alter security policies without adequate oversight or recovery mechanisms.
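
Commvault has not published AI Protect's internals, so the sketch below only illustrates the generic "Ctrl-Z" pattern: record an inverse operation before each agent side effect and unwind them in reverse on rollback. Every name here is hypothetical:

```python
# Conceptual undo-log pattern for agent side effects: remember how to reverse
# each action so a bad run can be unwound. Illustration only, not AI Protect.
from typing import Callable

class UndoLog:
    def __init__(self) -> None:
        self._undo_stack: list[Callable[[], None]] = []

    def perform(self, action: Callable[[], None], inverse: Callable[[], None]) -> None:
        """Run an agent action, remembering how to undo it."""
        self._undo_stack.append(inverse)
        action()

    def rollback(self) -> None:
        """Unwind all recorded actions, most recent first."""
        while self._undo_stack:
            self._undo_stack.pop()()

log = UndoLog()
store: dict[str, str] = {"config": "v1"}
log.perform(lambda: store.update(config="v2"),
            lambda: store.update(config="v1"))
log.rollback()   # store["config"] is back to "v1"
```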

🧠 AI · Neutral · arXiv – CS AI · 5d ago · 6/10

Negotiating Privacy with Smart Voice Assistants: Risk-Benefit and Control-Acceptance Tensions

Researchers studying 469 Canadian youth aged 16-24 developed a negotiation-based framework for privacy decision-making with smart voice assistants (SVAs), introducing two indices, a risk-benefit tension index (RBTI) and a control-acceptance tension index (CATI), that measure the competing pressures. The study finds that frequent SVA users exhibit benefit-dominant profiles and accept convenience trade-offs, suggesting the privacy paradox reflects negotiation rather than inconsistency.

🧠 AI · Neutral · arXiv – CS AI · 5d ago · 6/10

AdaProb: Efficient Machine Unlearning via Adaptive Probability

Researchers propose AdaProb, a machine unlearning method that enables trained AI models to efficiently forget specific data while preserving privacy and complying with regulations such as GDPR. The approach uses adaptive probability distributions and demonstrates a 20% improvement in forgetting effectiveness with 50% less computational overhead than existing methods.

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 6/10

Selective Forgetting for Large Reasoning Models

Researchers propose a new framework for 'selective forgetting' in Large Reasoning Models (LRMs) that can remove sensitive information from AI training data while preserving general reasoning capabilities. The method uses retrieval-augmented generation to identify and replace problematic reasoning segments with benign placeholders, addressing privacy and copyright concerns in AI systems.
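
The summary's replace-with-placeholder idea reduces to flagging reasoning segments that touch sensitive items and substituting benign text. The sketch below uses plain substring matching as a deliberate simplification of the paper's retrieval-augmented matching:

```python
# Toy sketch of replacing sensitive reasoning segments with benign
# placeholders. Substring matching stands in for the paper's
# retrieval-augmented identification; items and trace are invented.
SENSITIVE_ITEMS = ["Jane Doe", "Acme internal codename"]

def redact_reasoning(segments: list[str]) -> list[str]:
    out = []
    for seg in segments:
        if any(item.lower() in seg.lower() for item in SENSITIVE_ITEMS):
            out.append("[segment removed for privacy]")   # benign placeholder
        else:
            out.append(seg)
    return out

trace = ["First, recall that Jane Doe's address is on file.",
         "Then compute the shipping estimate from the zip code."]
print(redact_reasoning(trace))
```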

🧠 AI · Neutral · OpenAI News · Mar 11 · 6/10

Designing AI agents to resist prompt injection

The article discusses ChatGPT's defensive mechanisms against prompt injection attacks and social engineering attempts. It focuses on how the AI system constrains risky actions and protects sensitive data within agent workflows to maintain security and reliability.
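
OpenAI's implementation is not public, but one common defense-in-depth layer matching the article's framing can be sketched: refuse tool calls that fall outside an allow-list, and refuse risky tools whenever untrusted content is in the agent's context. Illustrative names only:

```python
# Minimal sketch of constraining risky agent actions against prompt
# injection: injected text in context cannot trigger side-effecting tools.
# Tool names and policy are illustrative, not OpenAI's implementation.
ALLOWED_TOOLS = {"search", "calculator"}
RISKY_TOOLS = {"send_email", "delete_file"}

def authorize_tool_call(tool: str, untrusted_content_in_context: bool) -> bool:
    if tool in RISKY_TOOLS and untrusted_content_in_context:
        return False            # block side effects while untrusted text is loaded
    return tool in ALLOWED_TOOLS or tool in RISKY_TOOLS

assert authorize_tool_call("search", untrusted_content_in_context=True)
assert not authorize_tool_call("send_email", untrusted_content_in_context=True)
```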

🧠 ChatGPT
🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Researchers have developed AloePri, the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input/output data while maintaining 96.5-100% accuracy and resisting state-of-the-art attacks, successfully tested on a 671B parameter model.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 5/10

Convenience vs. Control: A Qualitative Study of Youth Privacy with Smart Voice Assistants

A study of 26 young Canadians reveals that smart voice assistants' complex privacy controls and lack of transparency discourage privacy-protective behaviors among youth. Researchers propose design improvements including unified privacy hubs, plain-language data labels, and clearer retention policies to empower young users while maintaining convenience.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 5/10

Balancing Usability and Compliance in AI Smart Devices: A Privacy-by-Design Audit of Google Home, Alexa, and Siri

A research study analyzed privacy and usability trade-offs in AI smart devices (Google Home, Alexa, Siri) used by youth, finding that Google Home scored highest for usability while Siri led in regulatory compliance. The study revealed that while youth feel capable of managing their data, technical complexity and unclear policies limit their privacy control.

🧠 AI · Neutral · OpenAI News · Feb 13 · 6/10

Introducing Lockdown Mode and Elevated Risk labels in ChatGPT

OpenAI introduces new security features for ChatGPT including Lockdown Mode and Elevated Risk labels to help organizations protect against prompt injection attacks and AI-driven data exfiltration. These enterprise-focused security enhancements aim to address growing concerns about AI systems being exploited for malicious data access.

Page 1 of 2