32 articles tagged with #data-protection. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI × Crypto · Bullish · arXiv – CS AI · 1d ago · 7/10
🤖Researchers have developed presidio-hardened-x402, an open-source middleware that filters personally identifiable information from AI agent payment requests using the x402 protocol before data reaches payment servers or centralized APIs. The tool achieves 97.2% precision in detecting PII with minimal latency, addressing a critical privacy gap where payment metadata is currently transmitted without data processing agreements.
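The presidio-hardened-x402 internals aren't shown here, but the core idea of scrubbing PII from payment metadata before it leaves the agent can be sketched with a simple redaction pass. Everything below is illustrative: the function name, the pattern set, and the request shape are assumptions, and the real project presumably relies on Presidio's trained analyzers rather than hand-written regexes.

```python
import re

# Illustrative PII patterns; a production filter would use Presidio's
# recognizers, which cover far more entity types and edge cases.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_payment_metadata(metadata: dict) -> dict:
    """Return a copy of the payment metadata with detected PII masked."""
    clean = {}
    for key, value in metadata.items():
        if isinstance(value, str):
            for label, pattern in PII_PATTERNS.items():
                value = pattern.sub(f"<{label.upper()}>", value)
        clean[key] = value
    return clean

request = {"amount": "4.99",
           "memo": "refund to jane.doe@example.com, SSN 123-45-6789"}
print(redact_payment_metadata(request)["memo"])
# refund to <EMAIL>, SSN <SSN>
```

Sitting as middleware, a filter like this runs on every outbound request, so only the masked metadata ever reaches the payment server or downstream API.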
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠Researchers propose RPSG, a novel method for generating synthetic data from private text using large language models while maintaining differential privacy protections. The approach uses private seeds and formal privacy mechanisms during candidate selection, achieving high fidelity synthetic data with stronger privacy guarantees than existing methods.
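The RPSG paper's exact mechanism isn't reproduced here, but "formal privacy mechanisms during candidate selection" typically means something like the exponential mechanism: choose among candidate synthetic records with probability weighted by a utility score, calibrated so the choice satisfies ε-differential privacy. A minimal sketch, with a toy utility function of my own invention:

```python
import math, random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0, rng=random):
    """Pick one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity)) -- the standard
    exponential mechanism, which satisfies epsilon-differential privacy."""
    scores = [utility(c) for c in candidates]
    max_s = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(epsilon * (s - max_s) / (2 * sensitivity)) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]

# Toy utility: prefer LLM candidates close in length to a private reference.
private_text = "patient reported mild symptoms"
candidates = ["short note", "patient note of similar length here", "x"]
choice = exponential_mechanism(
    candidates, lambda c: -abs(len(c) - len(private_text)), epsilon=1.0
)
print(choice)
```

Higher-utility candidates win more often, but every candidate retains nonzero probability, which is what bounds how much any single private record can influence the output.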
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10
🧠Researchers present Opal, a private memory system for personal AI that uses trusted hardware enclaves and oblivious RAM to protect user data privacy while maintaining query accuracy. The system achieves a 13-percentage-point improvement in retrieval accuracy over semantic search, along with 29x higher throughput and 15x lower cost than secure baselines.
AI · Bearish · Ars Technica – AI · Mar 5 · 7/10
🧠Meta faces accusations of concealing facts about the privacy practices of its Ray-Ban smart glasses after workers reported viewing footage of people in bathrooms. The allegations raise serious concerns about user privacy and data handling for wearable AI devices.
AI · Bearish · TechCrunch – AI · Mar 5 · 7/10
🧠Meta faces a lawsuit over privacy concerns regarding its AI smart glasses, with allegations that the company's marketing promised user control while subcontractors were actually reviewing customer footage, including sensitive content. The legal action centers on the discrepancy between Meta's privacy promises and its actual data handling practices.
AI · Neutral · arXiv – CS AI · Mar 4 · 7/10
🧠Researchers introduce WARP, a new defense mechanism for machine unlearning protocols that protects against privacy attacks where adversaries can exploit differences between pre- and post-unlearning AI models. The technique reduces attack success rates by up to 92% while maintaining model accuracy on retained data.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers introduce Multi-PA, a comprehensive benchmark for evaluating privacy risks in Large Vision-Language Models (LVLMs), covering 26 personal-privacy, 15 trade-secret, and 18 state-secret categories across 31,962 samples. Testing 21 open-source and 2 closed-source LVLMs revealed significant vulnerabilities, with models generally posing a high risk of facilitating privacy breaches across categories.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers propose Partial Model Collapse (PMC), a novel machine unlearning method for large language models that removes private information without directly training on sensitive data. The approach leverages model collapse (the degradation of models trained on their own outputs) as a feature, deliberately forgetting targeted information while preserving general utility.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed AudAgent, an automated tool that monitors AI agents in real-time to ensure they comply with their stated privacy policies. The tool revealed that many AI agents powered by major providers like Claude, Gemini, and DeepSeek fail to protect highly sensitive data like SSNs and violate their own privacy policies.
AI · Bearish · arXiv – CS AI · Feb 27 · 7/10
🧠Researchers demonstrate that large language models can successfully deanonymize pseudonymous users across online platforms at scale, achieving up to 68% recall at 90% precision. The study shows LLMs can match users between platforms like Hacker News and LinkedIn, or across Reddit communities, using only unstructured text data.
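The paper's attack uses LLMs, but the underlying matching setup is easy to see with a classical stylometric baseline: represent each author by character n-gram frequencies and rank candidate identities by cosine similarity to the anonymous text. This sketch is a simplified stand-in for the paper's method, with invented example texts:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text (a crude style fingerprint)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a if g in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(anon_text, named_profiles):
    """Rank candidate identities by stylistic similarity to an anonymous text."""
    grams = char_ngrams(anon_text)
    return max(named_profiles,
               key=lambda name: cosine(grams, char_ngrams(named_profiles[name])))

profiles = {"alice": "i really love distributed systems and consensus protocols",
            "bob": "lol gonna grab tacos later, who's in??"}
print(best_match("consensus protocols are something i really love", profiles))
```

The paper's result is that LLMs do this matching far better than such baselines, at a scale that makes pseudonymity across platforms much weaker than users assume.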
AI · Neutral · OpenAI News · Nov 12 · 7/10
🧠OpenAI is resisting the New York Times' request for access to 20 million private ChatGPT conversations, while simultaneously implementing enhanced security and privacy protections for user data. This legal dispute highlights growing tensions over data privacy and corporate access to AI conversation logs.
AI · Bullish · Google DeepMind Blog · Oct 23 · 7/10
🧠VaultGemma represents a breakthrough as the most capable large language model trained from scratch using differential privacy techniques. This development advances privacy-preserving AI by demonstrating that sophisticated models can be built while maintaining strong data protection guarantees.
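Training "from scratch with differential privacy" generally means a DP-SGD-style recipe: clip each per-example gradient to a fixed norm, average, and add Gaussian noise before the weight update. The sketch below is a minimal illustration of that recipe on plain Python lists, not VaultGemma's actual training code; all parameter names are generic.

```python
import math, random

def dp_sgd_step(weights, per_example_grads, clip_norm, noise_mult, lr, rng=random):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise scaled by noise_mult * clip_norm."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(clipped)
    avg = [sum(g[i] for g in clipped) / n for i in range(len(weights))]
    noisy = [a + rng.gauss(0, noise_mult * clip_norm / n) for a in avg]
    return [w - lr * g for w, g in zip(weights, noisy)]

# Clipping bounds any single example's influence: gradients of norm 3 and 4
# are both scaled down to norm 1 before averaging.
print(dp_sgd_step([0.0], [[3.0], [4.0]],
                  clip_norm=1.0, noise_mult=0.0, lr=0.5))
```

The clipping bounds each example's contribution and the noise masks the remainder; a privacy accountant then converts the noise multiplier and number of steps into an overall (ε, δ) guarantee.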
AI · Neutral · OpenAI News · Jun 5 · 7/10
🧠OpenAI is challenging a court order, obtained by The New York Times, that would require indefinite retention of ChatGPT and API user data. The company is fighting the demand in order to protect user privacy while meeting its legal obligations and data protection commitments.
AI × Crypto · Bullish · Hugging Face Blog · Aug 2 · 7/10
🤖The article discusses the development of encrypted large language models using Fully Homomorphic Encryption (FHE) technology. This approach would allow AI models to process data while keeping it encrypted, potentially addressing privacy concerns in AI applications.
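Real FHE schemes (CKKS, TFHE, and the like, which the Hugging Face work builds on) are mathematically heavy, but the core property, computing on ciphertexts without ever decrypting them, can be shown with a toy Paillier cryptosystem, which is additively homomorphic. This is a teaching sketch with deliberately tiny demo primes, not a usable scheme:

```python
import math, random

# Toy Paillier cryptosystem: multiplying two ciphertexts yields a ciphertext
# of the SUM of the plaintexts, so the server computes on data it cannot read.
p, q = 293, 433               # demo primes; real keys use ~2048-bit primes
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard generator choice
lam = math.lcm(p - 1, q - 1)  # private key

def encrypt(m, rng=random):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n; with g = n + 1 the decryption simplifies to this.
    x = pow(c, lam, n2)
    return ((x - 1) // n * pow(lam, -1, n)) % n

a, b = encrypt(20), encrypt(22)
print(decrypt((a * b) % n2))  # 42: the sum, computed entirely on ciphertexts
```

Fully homomorphic schemes extend this to both addition and multiplication, which is what makes running an entire LLM forward pass over encrypted activations conceivable, at a steep performance cost the article's approach tries to manage.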
AI · Bullish · AI News · 10h ago · 6/10
🧠Commvault has launched AI Protect, a governance solution that provides rollback capabilities for autonomous AI agents operating in cloud environments. The platform addresses critical risks posed by AI systems that can independently delete files, access databases, modify infrastructure, and alter security policies without adequate oversight or recovery mechanisms.
AI · Neutral · arXiv – CS AI · 5d ago · 6/10
🧠Researchers studying 469 Canadian youth aged 16-24 developed a negotiation-based framework to understand privacy decision-making with smart voice assistants, introducing two tension indices (RBTI and CATI) that measure competing risk-benefit and control-acceptance pressures. The study reveals that frequent SVA users exhibit benefit-dominant profiles and accept convenience trade-offs, suggesting the privacy paradox reflects negotiation rather than inconsistency.
AI · Neutral · arXiv – CS AI · 5d ago · 6/10
🧠Researchers propose AdaProb, a machine unlearning method that enables trained AI models to efficiently forget specific data while preserving privacy and complying with regulations like GDPR. The approach uses adaptive probability distributions and demonstrates 20% improvement in forgetting effectiveness with 50% less computational overhead compared to existing methods.
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠Researchers propose a new framework for 'selective forgetting' in Large Reasoning Models (LRMs) that can remove sensitive information from AI training data while preserving general reasoning capabilities. The method uses retrieval-augmented generation to identify and replace problematic reasoning segments with benign placeholders, addressing privacy and copyright concerns in AI systems.
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers developed a framework to assess public summaries of AI training data required by EU's AI Act Article 53(1)(d), evaluating transparency and usefulness for stakeholder rights enforcement. The study analyzed 5 public summaries from GPAI model providers as of January 2026, creating guidelines for compliance and a public resource website.
AI · Neutral · OpenAI News · Mar 11 · 6/10
🧠The article discusses ChatGPT's defensive mechanisms against prompt injection attacks and social engineering attempts. It focuses on how the AI system constrains risky actions and protects sensitive data within agent workflows to maintain security and reliability.
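OpenAI has not published the internals of these constraints, but "constraining risky actions in agent workflows" usually amounts to a policy gate between the model's proposed action and its execution. The sketch below is entirely hypothetical: the action names, risk tiers, and exfiltration pattern are illustrative, not OpenAI's.

```python
import re

# Hypothetical policy gate for agent actions; tiers and patterns are invented
# for illustration and are far cruder than a production system's classifiers.
ELEVATED_RISK = {"send_email", "post_request", "delete_file"}
EXFIL_PATTERN = re.compile(r"(api[_-]?key|password|ssn|secret)", re.IGNORECASE)

def gate_action(action: str, payload: str) -> str:
    """Return 'allow', 'confirm' (needs explicit user approval), or 'block'."""
    if EXFIL_PATTERN.search(payload):
        return "block"      # looks like sensitive data leaving the agent
    if action in ELEVATED_RISK:
        return "confirm"    # side-effecting action: ask the user first
    return "allow"

print(gate_action("read_page", "summarize this article"))    # allow
print(gate_action("send_email", "meeting notes attached"))   # confirm
print(gate_action("send_email", "here is the API_KEY=..."))  # block
```

The key design point is that the gate runs outside the model, so a prompt-injected instruction can at most request a risky action; it cannot approve one.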
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers propose Talaria, a new confidential inference framework that protects client data privacy when using cloud-hosted Large Language Models. The system partitions LLM operations between client-controlled environments and cloud GPUs, reducing token reconstruction attacks from 97.5% to 1.34% accuracy while maintaining model performance.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed AloePri, the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input/output data while maintaining 96.5-100% accuracy and resisting state-of-the-art attacks, successfully tested on a 671B parameter model.
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠A study of 26 young Canadians reveals that smart voice assistants' complex privacy controls and lack of transparency discourage privacy-protective behaviors among youth. Researchers propose design improvements including unified privacy hubs, plain-language data labels, and clearer retention policies to empower young users while maintaining convenience.
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠A research study analyzed privacy and usability trade-offs in AI smart devices (Google Home, Alexa, Siri) used by youth, finding that Google Home scored highest for usability while Siri led in regulatory compliance. The study revealed that while youth feel capable of managing their data, technical complexity and unclear policies limit their privacy control.
AI · Neutral · OpenAI News · Feb 13 · 6/10
🧠OpenAI introduces new security features for ChatGPT including Lockdown Mode and Elevated Risk labels to help organizations protect against prompt injection attacks and AI-driven data exfiltration. These enterprise-focused security enhancements aim to address growing concerns about AI systems being exploited for malicious data access.