#privacy News & Analysis
315 articles tagged with #privacy. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
XRP Ledger taps Boundless for bank-grade privacy on public blockchains
XRP Ledger is integrating Boundless' zero-knowledge technology at its base layer to enable confidential transactions while maintaining regulatory transparency. This development positions XRPL to compete in the growing market for bank-grade privacy solutions on public blockchains.
Market makers are fleeing public blockchains to protect their secret trading playbooks
Market makers are migrating from public blockchains to private or semi-private solutions to shield their trading strategies from public visibility. A startup has adapted Wall Street's institutional trading practices—specifically private order flow handling—to blockchain markets, addressing a key pain point for professional traders who face front-running and strategy exposure risks on transparent networks.
Your Push Notifications Aren’t Safe From the FBI
The article reports on three significant cybersecurity and financial crime developments: FBI access to push notifications raising privacy concerns, Iran's extended internet blackout exceeding 1,000 hours, and cryptocurrency scams reaching record theft levels in the United States.
Yoko Li: The future of AI user interfaces demands new companies, effective security measures are vital for advanced AI, and personal assistant ecosystems are rapidly evolving | AI + a16z
Yoko Li discusses how AI's evolution in personal assistant interfaces requires new companies to challenge incumbents, emphasizing that robust security measures are critical for advanced AI systems. The personal assistant ecosystem is undergoing rapid transformation as AI capabilities expand, reshaping how users interact with technology and creating opportunities beyond legacy tech platforms.
Causality Laundering: Denial-Feedback Leakage in Tool-Calling LLM Agents
Researchers have identified a new security vulnerability called 'causality laundering' in AI tool-calling systems, where attackers can extract private information by learning from system denials and using that knowledge in subsequent tool calls. They developed the Agentic Reference Monitor (ARM) system to detect and prevent these attacks through enhanced provenance tracking.
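The attack-and-defense loop described above can be illustrated with a toy sketch. Everything here (the `Monitor` class, the taint set, the tool names) is hypothetical and simplified, not the paper's ARM design; it shows only the core idea that a denial itself leaks information, and that provenance tracking can block later calls derived from that leak.

```python
# Toy sketch of denial-feedback leakage and a provenance-tracking defense.
# All names and policies here are illustrative, not the paper's API.

class Monitor:
    def __init__(self):
        self.tainted = set()  # provenance: values the agent learned about via denials

    def call(self, tool, arg, derived_from=()):
        # Defense: refuse calls whose arguments derive from earlier denials
        # (the "causality laundering" path).
        if any(d in self.tainted for d in derived_from):
            return "DENY (laundered provenance)"
        if tool == "read_file" and arg.startswith("/private"):
            # Even a bare denial leaks one bit: this path is sensitive.
            self.tainted.add(arg)
            return "DENY"
        return f"OK: contents of {arg}"

m = Monitor()
print(m.call("read_file", "/private/salary.txt"))      # denied; attacker learns the path matters
print(m.call("web_search", "salary.txt owner",
             derived_from=("/private/salary.txt",)))   # blocked by provenance check
```

The point of the sketch is the second call: a naive monitor would allow it, letting the attacker exploit what the first denial revealed.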
Undetectable Conversations Between AI Agents via Pseudorandom Noise-Resilient Key Exchange
Researchers demonstrate that AI agents can conduct secret communications while maintaining seemingly normal interactions, even under surveillance that knows their protocols and contexts. The study introduces pseudorandom noise-resilient key exchange protocols that enable covert coordination between AI systems without pre-shared secrets.
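For context on the "without pre-shared secrets" claim: classic Diffie-Hellman already lets two parties derive a shared key purely from public messages. The paper's contribution is making those messages indistinguishable from random noise under surveillance; the sketch below shows only the baseline key agreement with a toy prime, and omits the pseudorandom-encoding step entirely.

```python
# Minimal, toy-sized Diffie-Hellman: two agents derive a shared key with no
# pre-shared secret. NOT the paper's protocol; the covert (noise-like)
# encoding of the exchanged values is the part this sketch leaves out.
import secrets

P = 2**127 - 1   # a Mersenne prime; far too small for real use
G = 3

a = secrets.randbelow(P - 2) + 1   # Alice's private exponent
b = secrets.randbelow(P - 2) + 1   # Bob's private exponent
A = pow(G, a, P)                   # sent over the observed channel
B = pow(G, b, P)

k_alice = pow(B, a, P)
k_bob = pow(A, b, P)
assert k_alice == k_bob            # same key on both sides, no prior secret
```

An observer who sees only `A` and `B` cannot recover the key; the surveillance-resistance question the paper addresses is whether the observer can even tell that a key exchange is happening.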
Opal: Private Memory for Personal AI
Researchers present Opal, a private memory system for personal AI that uses trusted hardware enclaves and oblivious RAM to protect user data privacy while maintaining query accuracy. The system achieves a 13-percentage-point improvement in retrieval accuracy over semantic search, along with 29x higher throughput and 15x lower costs than secure baselines.
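To see what "oblivious RAM" buys, here is the simplest possible access-pattern-hiding read, a linear scan that touches every slot regardless of which one is wanted. This is an illustrative baseline, not Opal's design; real ORAM constructions achieve sublinear overhead, which is what makes the system's throughput numbers notable.

```python
# Toy oblivious read: the memory trace is identical for every secret_index,
# so an observer of the access pattern learns nothing about the query.
# Illustrative only; not Opal's actual ORAM construction.

def oblivious_read(store: list, secret_index: int) -> int:
    result = 0
    for i, value in enumerate(store):   # every slot is read on every query
        match = int(i == secret_index)  # 1 for the wanted slot, 0 otherwise
        result += match * value         # arithmetic select, no data-dependent branch
    return result

memory = [42, 7, 99, 13]
assert oblivious_read(memory, 2) == 99
```

The cost is obvious: every read scans all of memory. ORAM schemes trade this linear scan for shuffled, encrypted block structures with polylogarithmic overhead.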
AI firm leader robbed at knifepoint, says attackers sought crypto
The leader of an AI firm was robbed at knifepoint by attackers specifically seeking cryptocurrency. This incident highlights the growing trend of 'wrench attacks' targeting crypto holders and emphasizes the critical need for enhanced security measures and privacy protection in the cryptocurrency industry.
Shape and Substance: Dual-Layer Side-Channel Attacks on Local Vision-Language Models
Researchers discovered significant privacy vulnerabilities in local Vision-Language Models that use Dynamic High-Resolution preprocessing. The dual-layer attack framework can exploit execution-time variations and cache patterns to infer sensitive information about processed images, even when models run locally for privacy.
Malicious LLM-Based Conversational AI Makes Users Reveal Personal Information
Researchers conducted a study with 502 participants demonstrating that malicious LLM-based conversational AI systems can be deliberately designed to extract personal information from users through manipulative conversation strategies. The study found that these malicious chatbots significantly outperformed benign versions at collecting personal data, with social psychology-based approaches being most effective while appearing less threatening to users.
The privacy paradox: regulating zero-knowledge finance in the EU and beyond
European regulators are grappling with how to regulate zero-knowledge proof technology in finance, which promises transaction privacy even as new anti-money-laundering laws demand greater transparency. This regulatory tension could significantly impact the development and adoption of privacy-focused financial technologies.
Cardano Founder Says This Midnight Deal Could Bring Billions In TVL
Cardano founder Charles Hoskinson highlighted a new partnership between Midnight (Cardano's privacy network) and Monument Bank, where the UK lender plans to put retail customer deposits on a public blockchain. Hoskinson described this as potentially one of the largest commercial deals for the privacy-focused network, with the potential to bring billions in total value locked (TVL).
Uncovering Memorization in Timeseries Imputation models: LBRM Membership Inference and its link to attribute Leakage
Researchers have identified critical privacy vulnerabilities in deep learning models used for time series imputation, demonstrating that these models can leak sensitive training data through membership and attribute inference attacks. The study introduces a two-stage attack framework that successfully retrieves significant portions of training data even from models designed to be robust against overfitting-based attacks.
GitHub hits CTRL-Z, decides it will train its AI with user data after all
GitHub has reversed its previous decision and will now train its AI systems using user data from its platform. This policy change affects millions of developers who store code repositories on GitHub, raising concerns about data privacy and intellectual property rights in AI training.
BitGo Adds CIP-56 Token Standard Support on Canton Network, Enabling Custody for USDCx and cBTC
BitGo has added support for the CIP-56 token standard on Canton Network, enabling custody services for USDCx and cBTC tokens. The CIP-56 standard provides privacy-preserving transfers and atomic settlement designed for regulated financial institutions, with Canton Network now processing over $350 billion in daily on-chain assets.
Sears Exposed AI Chatbot Phone Calls and Text Chats to Anyone on the Web
Sears inadvertently exposed customer conversations with AI chatbots containing personal information and contact details to public web access. This security breach creates risks for customers by making their personal data available to potential scammers for phishing attacks and fraud.
p²RAG: Privacy-Preserving RAG Service Supporting Arbitrary Top-k Retrieval
Researchers propose p²RAG, a new privacy-preserving Retrieval-Augmented Generation system that supports arbitrary top-k retrieval while being 3-300x faster than existing solutions. The system uses an interactive bisection method instead of sorting and employs secret sharing across two servers to protect user prompts and database content.
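The bisection idea can be sketched in the clear (this is a hedged illustration, not the paper's secret-shared protocol): instead of sorting all similarity scores, binary-search for a threshold such that exactly k scores exceed it. The only primitive needed per step is counting comparisons, which is far cheaper under secret sharing than a full oblivious sort.

```python
# Plaintext sketch of threshold bisection for top-k selection. In p²RAG this
# counting step would run over secret-shared scores across two servers; here
# it is shown in the clear purely to illustrate the control flow.

def top_k_by_bisection(scores, k, iters=50):
    lo, hi = min(scores), max(scores)
    for _ in range(iters):
        mid = (lo + hi) / 2
        count = sum(s > mid for s in scores)  # the only comparison primitive used
        if count >= k:
            lo = mid   # threshold too low (or exact): raise it
        else:
            hi = mid   # threshold too high: lower it
    # lo converges to just below the k-th largest score
    return [i for i, s in enumerate(scores) if s > lo][:k]

scores = [0.12, 0.87, 0.55, 0.91, 0.33, 0.60]
print(top_k_by_bisection(scores, 3))   # indices of the 3 highest scores: [1, 3, 5]
```

Each bisection step costs one comparison per score, so k and the database size drive cost very differently than in sort-based designs, which is where the reported 3-300x speedup plausibly comes from.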
VisualLeakBench: Auditing the Fragility of Large Vision-Language Models against PII Leakage and Social Engineering
Researchers introduced VisualLeakBench, a new evaluation suite that tests Large Vision-Language Models (LVLMs) for vulnerabilities to privacy attacks through visual inputs. The study found significant weaknesses in frontier AI systems like GPT-5.2, Claude-4, Gemini-3 Flash, and Grok-4, with Claude-4 showing the highest PII leakage rate at 74.4% despite having strong OCR attack resistance.
AI Evasion and Impersonation Attacks on Facial Re-Identification with Activation Map Explanations
Researchers developed a novel framework for generating adversarial patches that can fool facial recognition systems through both evasion and impersonation attacks. The method reduces facial recognition accuracy from 90% to 0.4% in white-box settings and demonstrates strong cross-model generalization, highlighting critical vulnerabilities in surveillance systems.
Membership Inference for Contrastive Pre-training Models with Text-only PII Queries
Researchers developed UMID, a new text-only auditing framework to detect if personally identifiable information was memorized during training of multimodal AI models like CLIP and CLAP. The method significantly improves efficiency and effectiveness of membership inference attacks while maintaining privacy constraints.