y0news

#ai-privacy News & Analysis

11 articles tagged with #ai-privacy. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

RePAIR: Interactive Machine Unlearning through Prompt-Aware Model Repair

Researchers introduce RePAIR, a framework enabling users to instruct large language models to forget harmful knowledge, misinformation, and personal data through natural language prompts at inference time. The system uses a training-free method called STAMP that manipulates model activations to achieve selective unlearning with minimal computational overhead, outperforming existing approaches while preserving model utility.
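The activation-manipulation idea can be pictured as removing a "forget" direction from a model's hidden states at inference time. The sketch below is a hypothetical toy in NumPy, not the paper's actual STAMP procedure; the `forget` direction and the toy activations are invented for illustration.

```python
import numpy as np

def remove_forget_direction(hidden, forget_dir):
    """Project token activations off a 'forget' direction.

    Hypothetical sketch of activation-level unlearning;
    not the paper's actual STAMP method."""
    d = forget_dir / np.linalg.norm(forget_dir)
    # Subtract each activation's component along d.
    return hidden - np.outer(hidden @ d, d)

# Toy example: four token activations in a 3-d hidden space.
hidden = np.array([[1.0, 2.0, 0.0],
                   [0.5, 0.0, 1.0],
                   [0.0, 1.0, 1.0],
                   [2.0, 2.0, 2.0]])
forget = np.array([0.0, 1.0, 0.0])  # pretend this direction encodes the fact to forget
cleaned = remove_forget_direction(hidden, forget)
```

After the projection, the activations carry no component along the forget direction, while the orthogonal components (the rest of the model's "knowledge" in this toy) are untouched.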

AI · Bearish · Wired – AI · 2d ago · 7/10

Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators

Over 70 civil rights organizations, including the ACLU and EPIC, have formally warned against Meta's facial recognition technology in smart glasses, citing serious risks to vulnerable populations including abuse victims, immigrants, and LGBTQ+ individuals. The coalition argues the AI feature could enable stalking, harassment, and discrimination at scale.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

ConfusionPrompt: Practical Private Inference for Online Large Language Models

Researchers introduce ConfusionPrompt, a privacy framework for large language models that decomposes each user prompt into smaller sub-prompts mixed with pseudo-prompts before they are sent to cloud servers. The method protects user privacy while retaining higher utility than existing perturbation-based approaches, and it works with existing black-box LLMs without modification.
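The decompose-and-mix idea can be illustrated with a toy: split a sensitive prompt into sub-prompts, shuffle in unrelated decoys, and send the batch so the server cannot tell which queries reflect the user's real intent. This is an invented rendering of the concept, not the paper's algorithm; the naive `" and "` split and the decoy prompts are assumptions for illustration.

```python
import random

def confuse(prompt, decoys, seed=0):
    """Split a prompt into sub-prompts and shuffle in decoy prompts.

    Illustrative sketch of prompt decomposition + mixing;
    not ConfusionPrompt's actual decomposition method."""
    rng = random.Random(seed)
    # Naive decomposition: split a compound question on " and ".
    subs = [s.strip() + "?" for s in prompt.rstrip("?").split(" and ")]
    batch = subs + decoys
    rng.shuffle(batch)  # server sees real and pseudo prompts interleaved
    return subs, batch

real, mixed = confuse(
    "What treatments exist for condition X and what are their side effects?",
    decoys=["What is the capital of France?", "How do tides work?"],
)
```

Only the client knows which entries in `mixed` are real, so it can discard the decoy answers and recompose the rest locally.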

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 5

Federated Inference: Toward Privacy-Preserving Collaborative and Incentivized Model Serving

Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.
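The core constraint, that only predictions cross model boundaries, can be sketched with a minimal ensemble: each participant exposes a prediction function, and the coordinator aggregates outputs without ever seeing parameters or training data. Majority voting here is an assumed aggregation rule for illustration, not the paper's protocol.

```python
from collections import Counter

def federated_inference(models, x):
    """Aggregate predictions from independently trained models.

    Conceptual sketch: only outputs cross the boundary, never
    parameters or data. Majority vote is an assumed rule."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical participants, exposed only as prediction functions.
models = [lambda x: "cat", lambda x: "cat", lambda x: "dog"]
label = federated_inference(models, "some input")
```

Unlike federated learning, nothing is trained jointly; the collaboration happens entirely at serving time.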

AI · Bearish · arXiv – CS AI · Mar 4 · 7/10 · 2

Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models

Researchers have identified a critical privacy vulnerability in multi-modal large reasoning models (MLRMs): adversaries can infer users' sensitive location information from images, including home addresses from selfies. The study introduces the DoxBench dataset and demonstrates that 11 advanced MLRMs consistently outperform humans at geolocation inference, significantly lowering the barrier to privacy attacks.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 4

AudAgent: Automated Auditing of Privacy Policy Compliance in AI Agents

Researchers have developed AudAgent, an automated tool that monitors AI agents in real-time to ensure they comply with their stated privacy policies. The tool revealed that many AI agents powered by major providers like Claude, Gemini, and DeepSeek fail to protect highly sensitive data like SSNs and violate their own privacy policies.
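A runtime compliance monitor of this kind can be reduced to its simplest form: scan agent output against rules derived from the stated privacy policy and flag violations. The sketch below is a minimal illustrative check (a single SSN rule), not AudAgent's implementation; the function name and policy flag are invented.

```python
import re

# U.S. SSN pattern, e.g. 123-45-6789 (illustrative rule only).
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def audit_output(text, policy_forbids_ssn=True):
    """Return policy violations found in an agent's output.

    Minimal sketch of real-time policy auditing; a real auditor
    would derive many such rules from the stated privacy policy."""
    violations = []
    if policy_forbids_ssn and SSN_RE.search(text):
        violations.append("SSN disclosed")
    return violations
```

Hooked into the agent's output stream, such a check can block or log a response before sensitive data leaves the system.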

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10

Stake the Points: Structure-Faithful Instance Unlearning

Researchers propose a new "structure-faithful" framework for machine unlearning that preserves semantic relationships in AI models while removing specific data. The method uses semantic anchors to maintain knowledge structure, showing significant performance improvements of 19-33% across image classification, retrieval, and face recognition tasks.
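The two competing pressures, forget the target instances but keep the semantic structure anchored, can be rendered as a toy objective: penalize residual confidence on forgotten instances above a margin, and penalize any drift of anchor embeddings. This is a hypothetical paraphrase of the idea, not the paper's actual loss.

```python
import numpy as np

def unlearning_objective(forget_scores, anchors_before, anchors_after, margin=1.0):
    """Toy objective in the spirit of structure-faithful unlearning.

    forget_scores: model confidence on instances to be forgotten.
    anchors_*: embeddings of 'semantic anchor' points before/after
    unlearning. Hypothetical sketch, not the paper's loss."""
    # Push confidence on forgotten instances below the margin.
    forget_term = np.maximum(0.0, forget_scores - margin).mean()
    # Penalize drift of the anchors that hold the knowledge structure in place.
    anchor_term = np.linalg.norm(anchors_after - anchors_before, axis=1).mean()
    return forget_term + anchor_term

# Anchors unchanged and forget confidence already low: objective is zero.
loss_ok = unlearning_objective(np.array([0.2, 0.5]), np.zeros((2, 3)), np.zeros((2, 3)))
# Residual confidence above the margin incurs a penalty.
loss_bad = unlearning_objective(np.array([2.0, 1.0]), np.zeros((2, 3)), np.zeros((2, 3)))
```

The anchor term is what distinguishes this from naive unlearning: forgetting is only "free" if the surrounding semantic relationships stay put.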

AI · Neutral · arXiv – CS AI · Mar 3 · 5/10 · 5

Balancing Usability and Compliance in AI Smart Devices: A Privacy-by-Design Audit of Google Home, Alexa, and Siri

A research study analyzed privacy and usability trade-offs in AI smart devices (Google Home, Alexa, Siri) used by youth, finding that Google Home scored highest for usability while Siri led in regulatory compliance. The study revealed that while youth feel capable of managing their data, technical complexity and unclear policies limit their privacy control.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 17

Controllable Reasoning Models Are Private Thinkers

Researchers developed a method to train AI reasoning models to follow privacy instructions in their internal reasoning traces, not just final answers. The approach uses separate LoRA adapters and achieves up to 51.9% improvement on privacy benchmarks, though with some trade-offs in task performance.

AI · Bullish · OpenAI News · Aug 28 · 6/10 · 7

Introducing ChatGPT Enterprise

OpenAI announces ChatGPT Enterprise, a new business-focused version of their AI chatbot offering enhanced security, privacy features, and more powerful capabilities. This represents OpenAI's strategic push into the enterprise market with premium AI services.

AI · Neutral · Google Research Blog · Oct 30 · 5/10 · 7

Toward provably private insights into AI use

Google Research discusses progress on privacy-preserving methods for analyzing how AI systems are used, part of ongoing efforts to balance the need for usage insights with privacy protection in AI deployment and monitoring.