56 articles tagged with #adversarial-attacks. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠Researchers developed Conflict-aware Evidential Deep Learning (C-EDL), a new uncertainty quantification approach that significantly improves AI model reliability against adversarial attacks and out-of-distribution data. The method achieves up to 90% reduction in adversarial data coverage and 55% reduction in out-of-distribution data coverage without requiring model retraining.
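For context, the generic evidential-deep-learning recipe that C-EDL builds on can be sketched in a few lines: logits are mapped to non-negative evidence, the evidence parameterises a Dirichlet distribution, and low total evidence yields a high "vacuity" score that can flag adversarial or out-of-distribution inputs. A minimal sketch (standard EDL only, not the paper's conflict-aware variant; the function name and 0.8 threshold are illustrative):

```python
import torch
import torch.nn.functional as F

def edl_vacuity(logits: torch.Tensor) -> torch.Tensor:
    """Epistemic (vacuity) uncertainty from evidential classifier outputs.

    Standard recipe: evidence e = softplus(logits) >= 0 parameterises a
    Dirichlet with alpha = e + 1; vacuity u = K / sum(alpha) approaches 1
    when total evidence is low, which is how adversarial / OOD inputs are
    typically flagged without retraining the backbone.
    """
    evidence = F.softplus(logits)
    alpha = evidence + 1.0
    num_classes = logits.shape[-1]
    return num_classes / alpha.sum(dim=-1)

# Flag inputs whose vacuity exceeds a (purely illustrative) threshold.
logits = torch.randn(4, 10)            # stand-in for evidential head outputs
suspicious = edl_vacuity(logits) > 0.8
```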
AI · Bearish · arXiv – CS AI · Mar 4 · 6/10
🧠Researchers developed a new AI attack method that can fool speaker recognition systems with 10x fewer attempts than previous approaches. The technique uses feature-aligned inversion to optimize attacks in latent space, achieving up to 91.65% success rate with only 50 queries.
AI · Neutral · arXiv – CS AI · Mar 4 · 7/10
🧠Researchers introduce WARP, a new defense mechanism for machine unlearning protocols that protects against privacy attacks where adversaries can exploit differences between pre- and post-unlearning AI models. The technique reduces attack success rates by up to 92% while maintaining model accuracy on retained data.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers identify a 'safety mirage' problem in vision language models where supervised fine-tuning creates spurious correlations that make models vulnerable to simple attacks and overly cautious with benign queries. They propose machine unlearning as an alternative that reduces attack success rates by up to 60.27% and unnecessary rejections by over 84.20%.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers propose TDAE, a new defense framework that protects images from malicious AI-powered edits using imperceptible perturbations and coordinated image-text optimization. The system employs a FlatGrad Defense Mechanism for visual protection and a Dynamic Prompt Defense for textual enhancement, achieving better cross-model transferability than existing methods.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠Researchers introduce HubScan, an open-source security scanner that detects 'hubness poisoning' attacks in Retrieval-Augmented Generation (RAG) systems. The tool achieves 90% recall at detecting adversarial content that exploits vector similarity search vulnerabilities, addressing a critical security flaw in AI systems that rely on external knowledge retrieval.
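"Hubness" here means that a handful of vectors end up in the top-k retrieval results for an abnormally large share of queries, which is exactly what a poisoned passage tries to achieve. A minimal k-occurrence check over a sampled query set (illustrative only; the function name and flagging threshold are assumptions, not HubScan's actual detector):

```python
import numpy as np

def hubness_counts(doc_embs: np.ndarray, query_embs: np.ndarray, k: int = 10) -> np.ndarray:
    """k-occurrence count per document: how often it appears in the top-k
    results over a sample of queries.  Embeddings are assumed L2-normalised,
    so the dot product equals cosine similarity."""
    sims = query_embs @ doc_embs.T                    # (n_queries, n_docs)
    topk = np.argpartition(-sims, k, axis=1)[:, :k]   # top-k doc ids per query
    return np.bincount(topk.ravel(), minlength=doc_embs.shape[0])

# Documents far above the mean count are candidates for manual review:
# counts = hubness_counts(doc_embs, query_embs)
# flagged = np.where(counts > counts.mean() + 3 * counts.std())[0]
```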
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠Researchers have conducted a comprehensive review of adversarial transferability in image classification, identifying gaps in standardized evaluation frameworks for transfer-based attacks. They propose a benchmark framework and categorize existing attacks into six distinct types to address biased assessments in current research.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠Researchers propose Random Parameter Pruning Attack (RaPA), a new method that improves targeted adversarial attacks by randomly pruning model parameters during optimization. The technique achieves up to 11.7% higher attack success rates when transferring from CNN to Transformer models compared to existing methods.
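A rough sketch of the idea, assuming "random parameter pruning during optimization" means temporarily zeroing a random fraction of the surrogate model's weights before each gradient step of the attack (the function name, pruning rate, and update rule are illustrative; the paper's exact procedure may differ):

```python
import torch
import torch.nn.functional as F

def pruned_attack_grad(model, x_adv, y_target, prune_p=0.1):
    """Compute one attack gradient on a randomly pruned copy of the weights,
    then restore them.  Varying the active sub-network each step is meant to
    reduce overfitting to the surrogate model and improve transfer."""
    backups = [p.data.clone() for p in model.parameters()]
    for p in model.parameters():
        p.data.mul_((torch.rand_like(p) > prune_p).float())   # temporary pruning
    x_adv = x_adv.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y_target).backward()
    for p, b in zip(model.parameters(), backups):
        p.data.copy_(b)                                        # restore weights
    return x_adv.grad

# Targeted PGD-style update with the pruned-model gradient, e.g.:
# x_adv = (x_adv - alpha * pruned_attack_grad(model, x_adv, y_target).sign()).clamp(0, 1)
```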
AI · Neutral · Lil'Log (Lilian Weng) · Oct 25 · 7/10
🧠Large language models like ChatGPT face security challenges from adversarial attacks and jailbreak prompts that can bypass safety measures implemented during alignment processes like RLHF. Unlike image-based attacks that operate in continuous space, text-based adversarial attacks are more challenging due to the discrete nature of language and lack of direct gradient signals.
🏢 OpenAI · 🧠 ChatGPT
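The continuous-space, direct-gradient attack the post contrasts with discrete text is easy to write down; a minimal FGSM sketch (the textbook formulation, with an illustrative epsilon):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: a single signed gradient step in continuous
    pixel space.  No direct analogue exists for text, where tokens are
    discrete and there is no gradient path back to the input characters."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```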
AI · Bearish · OpenAI News · Jul 17 · 7/10
🧠Researchers have developed adversarial images that can consistently fool neural network classifiers across multiple scales and viewing perspectives. The finding challenges previous assumptions that self-driving cars would be secure from malicious attacks due to their multi-angle image capture capabilities.
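Perturbations that survive rescaling and viewpoint changes are usually obtained by optimising them under randomly sampled transformations of the image. A rough sketch with random rescaling only (illustrative; function names and hyperparameters are assumptions, not the article's exact procedure):

```python
import torch
import torch.nn.functional as F

def scale_robust_attack(model, x, target, eps=0.05, steps=200, lr=0.01):
    """Optimise one bounded perturbation so the classifier outputs `target`
    even after the image is resampled at a random scale.  `x` is assumed to
    be a (B, C, H, W) batch of images in [0, 1]."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        s = float(torch.empty(1).uniform_(0.5, 1.5))               # random scale
        x_adv = (x + delta).clamp(0, 1)
        x_small = F.interpolate(x_adv, scale_factor=s, mode="bilinear",
                                align_corners=False)
        x_t = F.interpolate(x_small, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)                   # back to model size
        loss = F.cross_entropy(model(x_t), target)                  # pull toward target class
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                                # keep it imperceptible
    return (x + delta).clamp(0, 1).detach()
```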
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠Researchers propose CanaryRAG, a runtime defense mechanism that protects Retrieval-Augmented Generation systems from adversarial attacks that extract proprietary data from knowledge bases. The solution uses embedded canary tokens to detect leakage in real-time while maintaining normal system performance, offering a practical safeguard for organizations deploying RAG-based AI systems.
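A minimal sketch of the canary-token idea, assuming it amounts to planting high-entropy markers in knowledge-base documents and scanning generated answers for them at runtime (names and the marker format are illustrative; CanaryRAG's actual runtime mechanism may be more involved):

```python
import secrets

def make_canaries(n: int = 16) -> list[str]:
    """High-entropy markers to append to knowledge-base documents before
    indexing.  They should never appear in a legitimate answer, so any
    occurrence in model output signals that raw KB text is being extracted."""
    return [f"CANARY-{secrets.token_hex(8)}" for _ in range(n)]

def leaked(response: str, canaries: list[str]) -> bool:
    return any(c in response for c in canaries)

canaries = make_canaries()
# ...append each canary to a different document before building the index...
# if leaked(llm_response, canaries): trigger an extraction alert
```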
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠Researchers introduce ImageProtector, a user-side defense mechanism that embeds imperceptible perturbations into images to prevent multi-modal large language models from analyzing them. When adversaries attempt to extract sensitive information from protected images, MLLMs are induced to refuse analysis, though potential countermeasures exist that may partially mitigate the technique's effectiveness.
AI · Bearish · arXiv – CS AI · 3d ago · 6/10
🧠Researchers demonstrate a white-box adversarial attack on computer vision models using SHAP values to identify and exploit critical input features, showing superior robustness compared to the Fast Gradient Sign Method, particularly when gradient information is obscured or hidden.
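A rough sketch of attribution-guided perturbation, using gradient-times-input as a stand-in for the SHAP values the paper relies on (the function name, attribution choice, and budget are illustrative): only the most influential input features receive a signed step, the rest are left untouched.

```python
import torch
import torch.nn.functional as F

def topk_feature_attack(model, x, y, eps=0.1, frac=0.05):
    """Perturb only the top `frac` most influential features.  Importance is
    |gradient * input| here; swapping in SHAP attributions gives the paper's
    setting."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    importance = (x.grad * x).abs().flatten(1)                  # (B, D) attributions
    k = max(1, int(frac * importance.shape[1]))
    idx = importance.topk(k, dim=1).indices
    mask = torch.zeros_like(importance).scatter_(1, idx, 1.0).view_as(x)
    return (x + eps * mask * x.grad.sign()).clamp(0, 1).detach()
```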
AI · Bearish · arXiv – CS AI · Mar 26 · 6/10
🧠Researchers propose PoiCGAN, a new targeted poisoning attack method for federated learning that uses feature-label joint perturbation to bypass detection mechanisms. The attack achieves 83.97% higher success rates than existing methods while maintaining model performance with less than 8.87% accuracy reduction.
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers discovered that skip connections in deep neural networks make adversarial attacks more transferable across different AI models. They developed the Skip Gradient Method (SGM) which exploits this vulnerability in ResNets, Vision Transformers, and even Large Language Models to create more effective adversarial examples.
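The gradient-rescaling trick behind SGM fits in a few lines: leave the forward pass of a residual block unchanged, but scale the gradient flowing through the residual branch by a factor gamma < 1 so that more of the attack gradient travels along the skip connection. A sketch of that idea (the published implementation uses backward hooks; this detach-based form has the same effect):

```python
import torch

def sgm_residual(x: torch.Tensor, residual_fn, gamma: float = 0.5) -> torch.Tensor:
    """Residual block forward with Skip Gradient Method rescaling.

    The output equals x + residual_fn(x) exactly, but in the backward pass the
    gradient through the residual branch is multiplied by gamma, so the skip
    path dominates.  Adversarial examples crafted with these rescaled
    gradients tend to transfer better across architectures."""
    r = residual_fn(x)
    return x + gamma * r + (1.0 - gamma) * r.detach()
```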
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠A new study reveals that AI judges used to evaluate the safety of large language models perform poorly when assessing adversarial attacks, often degrading to near-random accuracy. The research analyzed 6,642 human-verified labels and found that many attacks artificially inflate their success rates by exploiting judge weaknesses rather than generating genuinely harmful content.
AI · Neutral · arXiv – CS AI · Mar 12 · 6/10
🧠Researchers propose Contract And Conquer (CAC), a new method for provably generating adversarial examples against black-box neural networks using knowledge distillation and search space contraction. The approach provides theoretical guarantees for finding adversarial examples within a fixed number of iterations and outperforms existing methods on ImageNet datasets including vision transformers.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers introduced the Synthetic Web Benchmark, revealing that frontier AI language models fail catastrophically when exposed to high-plausibility misinformation in search results. The study shows current AI agents struggle to handle conflicting information sources, with accuracy collapsing despite access to truthful content.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers evaluated Naturalistic Adversarial Patches (NAPs) that can fool autonomous vehicle traffic sign detection systems in physical environments. The study used a custom dataset and YOLOv5 model to generate patches that successfully reduced STOP sign detection confidence across various real-world testing conditions.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed quantum optimization models for robust verification of deep neural networks against adversarial attacks. The approach provides exact verification for ReLU networks and asymptotically complete verification for networks with general activation functions like sigmoid and tanh.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers introduce ROKA, a new machine unlearning method that prevents knowledge contamination and indirect attacks on AI models. The approach uses 'Neural Healing' to preserve important knowledge while forgetting targeted data, providing theoretical guarantees for knowledge preservation during unlearning.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed CaptionFool, a universal adversarial attack that can manipulate AI image captioning models by modifying just 1.2% of image patches. The attack achieves 94-96% success rates in forcing models to generate arbitrary captions, including offensive content that can bypass content moderation systems.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers developed AdvBandit, a new black-box adversarial attack method that can exploit neural contextual bandits by poisoning context data without requiring access to internal model parameters. The attack uses bandit theory and inverse reinforcement learning to adaptively learn victim policies and optimize perturbations, achieving higher victim regret than existing methods.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠Researchers developed AMDS, an attack-aware multi-stage defense system for network intrusion detection that uses adaptive weight learning to counter adversarial attacks. The system achieved 94.2% AUC and improved classification accuracy by 4.5 percentage points over existing adversarially trained ensembles by learning attack-specific detection strategies.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers discovered that dataset distillation, a technique for compressing large datasets into smaller synthetic ones, has serious privacy vulnerabilities. The study introduces an Information Revelation Attack (IRA) that can extract sensitive information from synthetic datasets, including predicting the distillation algorithm, model architecture, and recovering original training samples.