11 articles tagged with #ai-privacy. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers introduce RePAIR, a framework enabling users to instruct large language models to forget harmful knowledge, misinformation, and personal data through natural language prompts at inference time. The system uses a training-free method called STAMP that manipulates model activations to achieve selective unlearning with minimal computational overhead, outperforming existing approaches while preserving model utility.
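The summary does not spell out how STAMP edits activations; a common training-free primitive in this family is directional ablation, which projects a "forget" direction out of each hidden activation. The sketch below is purely illustrative (the function name and toy vectors are assumptions, not the paper's method):

```python
import numpy as np

def ablate_direction(activations: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove each activation's component along a 'forget' direction.

    A training-free edit: projecting out a direction associated with
    unwanted knowledge leaves the rest of the representation intact.
    """
    d = direction / np.linalg.norm(direction)          # unit "forget" direction
    return activations - np.outer(activations @ d, d)  # subtract the projection

# Toy check: after ablation, activations carry no signal along `d`.
acts = np.array([[2.0, 1.0], [0.5, -3.0]])
d = np.array([1.0, 0.0])
cleaned = ablate_direction(acts, d)
```

In a real model this edit would be applied to hidden states inside the forward pass, with the direction estimated from contrast pairs of "knows" vs. "forgot" examples.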
AI · Bearish · Wired – AI · 2d ago · 7/10
🧠 Over 70 civil rights organizations, including the ACLU and EPIC, have formally warned against Meta's facial recognition technology in smart glasses, citing serious risks to vulnerable populations including abuse victims, immigrants, and LGBTQ+ individuals. The coalition argues the AI feature could enable stalking, harassment, and discrimination at scale.
AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠 Researchers introduce ConfusionPrompt, a privacy framework for large language models that decomposes user prompts into smaller sub-prompts mixed with pseudo-prompts before sending them to cloud servers. The method protects user privacy while maintaining higher utility than existing perturbation-based approaches and works with existing black-box LLMs without modification.
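The decompose-and-mix idea can be sketched in a few lines: split the sensitive prompt into sub-prompts, shuffle in decoy prompts, and remember which positions are real so the answers can be recombined client-side. This is a minimal sketch under assumed details (naive sentence splitting, a fixed seed); the paper's actual decomposition is more sophisticated:

```python
import random

def confuse(prompt: str, decoys: list[str], seed: int = 0) -> tuple[list[str], list[int]]:
    """Split a sensitive prompt into sub-prompts, mix in decoys, and shuffle,
    so the server never sees the full query in one piece.
    Returns the shuffled batch plus the positions of the real sub-prompts."""
    subs = [s.strip() for s in prompt.split(".") if s.strip()]  # naive clause split
    batch = subs + decoys
    rng = random.Random(seed)
    order = list(range(len(batch)))
    rng.shuffle(order)
    shuffled = [batch[i] for i in order]
    real_positions = [shuffled.index(s) for s in subs]  # client-side recombination map
    return shuffled, real_positions

batch, keep = confuse(
    "My name is Alice. Summarize my medical report",
    decoys=["Translate the word hello into French"],
)
```

Each element of `batch` would be sent as an independent request; only the client holds `keep`, so only the client can reassemble the real answer.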
AI · Neutral · arXiv – CS AI · Mar 4 · 7/10
🧠 Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.
AI · Bearish · arXiv – CS AI · Mar 4 · 7/10
🧠 Researchers have identified a critical privacy vulnerability in multi-modal large reasoning models (MLRMs) where adversaries can infer users' sensitive location information from images, including home addresses from selfies. The study introduces the DoxBench dataset and demonstrates that 11 advanced MLRMs consistently outperform humans in geolocation inference, significantly lowering barriers for privacy attacks.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers have developed AudAgent, an automated tool that monitors AI agents in real time to ensure they comply with their stated privacy policies. The tool revealed that many AI agents powered by major providers like Claude, Gemini, and DeepSeek fail to protect highly sensitive data like SSNs and violate their own privacy policies.
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠 Researchers propose a new "structure-faithful" framework for machine unlearning that preserves semantic relationships in AI models while removing specific data. The method uses semantic anchors to maintain knowledge structure, showing significant performance improvements of 19–33% across image classification, retrieval, and face recognition tasks.
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10
🧠 A research study analyzed privacy and usability trade-offs in AI smart devices (Google Home, Alexa, Siri) used by youth, finding that Google Home scored highest for usability while Siri led in regulatory compliance. The study revealed that while youth feel capable of managing their data, technical complexity and unclear policies limit their privacy control.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers developed a method to train AI reasoning models to follow privacy instructions in their internal reasoning traces, not just final answers. The approach uses separate LoRA adapters and achieves up to 51.9% improvement on privacy benchmarks, though with some trade-offs in task performance.
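For readers unfamiliar with the adapters mentioned above: LoRA adds a trainable low-rank update, scaled by alpha/r, alongside a frozen weight matrix. A toy NumPy sketch (all shapes and values hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 6, 2, 4      # toy dimensions; rank r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in))          # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Frozen base path plus low-rank adapter path, scaled by alpha / r."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapter starts as a no-op: output equals W @ x.
```

Because only A and B are trained, separate adapters are cheap to store and swap, which is what makes a dedicated "privacy reasoning" adapter practical alongside the base model.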
AI · Bullish · OpenAI News · Aug 28 · 6/10
🧠 OpenAI announces ChatGPT Enterprise, a new business-focused version of their AI chatbot offering enhanced security, privacy features, and more powerful capabilities. This represents OpenAI's strategic push into the enterprise market with premium AI services.
AI · Neutral · Google Research Blog · Oct 30 · 5/10
🧠 The article discusses developments in creating privacy-preserving methods for analyzing AI system usage. This represents ongoing efforts to balance transparency needs with privacy protection in AI deployment and monitoring.