y0news

#adversarial-robustness News & Analysis

21 articles tagged with #adversarial-robustness. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

ASGuard: Activation-Scaling Guard to Mitigate Targeted Jailbreaking Attack

Researchers introduce ASGuard, a mechanistically informed framework that identifies and mitigates vulnerabilities in large language models' safety mechanisms, particularly those exploited by targeted jailbreaking attacks such as tense-changing prompts. By using circuit analysis to locate vulnerable attention heads and applying channel-wise scaling vectors, ASGuard reduces attack success rates while maintaining model utility and general capabilities.
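
The channel-wise scaling idea can be sketched as follows; the scaling values and the choice of which channels to dampen are purely illustrative, not taken from the paper:

```python
# Hypothetical sketch of channel-wise activation scaling in the spirit of
# ASGuard: a per-channel vector rescales the output of an attention head
# that circuit analysis has flagged as part of a jailbreak pathway.
def scale_head_output(head_output, scale):
    """Apply a per-channel scaling vector to one attention head's output."""
    assert len(head_output) == len(scale)
    return [h * s for h, s in zip(head_output, scale)]

# Suppose channels 1 and 3 are implicated in the vulnerable circuit (an
# assumption for illustration): their scale factors are reduced, the rest
# pass through unchanged.
head_output = [0.8, 2.0, -0.4, 1.6]
scale = [1.0, 0.5, 1.0, 0.5]
print(scale_head_output(head_output, scale))  # [0.8, 1.0, -0.4, 0.8]
```

In the real method the scaling vector is learned so that safety behavior is restored while unrelated channels keep their original magnitudes.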

AI · Bearish · arXiv – CS AI · 6d ago · 7/10

Mobile GUI Agents under Real-world Threats: Are We There Yet?

Researchers have identified critical vulnerabilities in mobile GUI agents powered by large language models, revealing that third-party content in real-world apps causes these agents to fail significantly more often than benchmark tests suggest. Testing on 122 dynamic tasks and over 3,000 static scenarios shows misleading rates of 36-42%, raising serious concerns about deploying these agents in commercial settings.

AI · Bearish · arXiv – CS AI · Apr 14 · 7/10

The Deployment Gap in AI Media Detection: Platform-Aware and Visually Constrained Adversarial Evaluation

Researchers reveal a significant gap between laboratory performance and real-world reliability in AI-generated media detectors, demonstrating that models achieving 99% accuracy in controlled settings experience substantial degradation when subjected to platform-specific transformations like compression and resizing. The study introduces a platform-aware adversarial evaluation framework showing detectors become vulnerable to realistic attack scenarios, highlighting critical security risks in current AI detection benchmarks.
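A minimal sketch of the evaluation idea, with toy stand-ins for the platform transformations (the paper's actual pipeline applies real compression and resizing, which these functions only imitate):

```python
# Hedged sketch of platform-aware evaluation: apply the transformations a
# sharing platform would perform before scoring the detector, instead of
# evaluating on pristine images.
def resize_half(img):
    # Naive 2x downsample: keep every other row/column (stand-in for resizing).
    return [row[::2] for row in img[::2]]

def quantize(img, step=32):
    # Coarse value quantization as a stand-in for lossy compression artifacts.
    return [[(v // step) * step for v in row] for row in img]

def platform_pipeline(img):
    # Detectors tuned on clean inputs are evaluated on this degraded version.
    return quantize(resize_half(img))

img = [[17, 200, 45, 90], [33, 60, 250, 12], [5, 128, 99, 77], [64, 64, 64, 64]]
print(platform_pipeline(img))  # [[0, 32], [0, 96]]
```

The point is that detector accuracy should be reported after such a pipeline, since that is what deployed systems actually see.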

AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Synthius-Mem: Brain-Inspired Hallucination-Resistant Persona Memory Achieving 94.4% Memory Accuracy and 99.6% Adversarial Robustness on LoCoMo

Researchers present Synthius-Mem, a brain-inspired AI memory system that achieves 94.4% accuracy on the LoCoMo benchmark while maintaining 99.6% adversarial robustness, preventing hallucinations about facts users never shared. The system outperforms existing approaches by structuring persona extraction across six cognitive domains rather than treating memory as raw dialogue retrieval, reducing token consumption by 5x.

AI · Neutral · arXiv – CS AI · Apr 14 · 7/10

ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection

Researchers introduce ClawGuard, a runtime security framework that protects tool-augmented LLM agents from indirect prompt injection attacks by enforcing user-confirmed rules at tool-call boundaries. The framework blocks malicious instructions embedded in tool responses without requiring model modifications, demonstrating robust protection across multiple state-of-the-art language models.
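
The boundary-enforcement idea can be sketched roughly like this; the rule set and tool names are invented for illustration and are not from the paper:

```python
# Hypothetical sketch of rule enforcement at the tool-call boundary: before an
# agent-issued tool call executes, it is checked against user-confirmed rules.
# Calls outside those rules are blocked even if text injected into an earlier
# tool response asked for them.
RULES = {
    "read_file": lambda args: args["path"].startswith("/workspace/"),
    "send_email": lambda args: False,  # the user never confirmed email sending
}

def guard_tool_call(tool, args):
    check = RULES.get(tool)
    if check is None or not check(args):
        return ("BLOCKED", tool)
    return ("ALLOWED", tool)

print(guard_tool_call("read_file", {"path": "/workspace/notes.txt"}))  # ('ALLOWED', 'read_file')
print(guard_tool_call("send_email", {"to": "attacker@example.com"}))   # ('BLOCKED', 'send_email')
```

Because the check sits between the model and the tool runtime, it needs no access to model weights, which matches the "no model modifications" claim.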

AI · Bullish · arXiv – CS AI · Apr 14 · 7/10

Risk Awareness Injection: Calibrating Vision-Language Models for Safety without Compromising Utility

Researchers propose Risk Awareness Injection (RAI), a lightweight, training-free framework that enhances vision-language models' ability to recognize unsafe content by amplifying risk signals in their feature space. The method maintains model utility while significantly reducing vulnerability to multimodal jailbreak attacks, addressing a critical security gap in VLMs.
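
One way to picture feature-space risk amplification, under the assumption (ours, not necessarily the paper's exact construction) that a risk direction can be estimated from a handful of labeled feature vectors:

```python
# Minimal sketch of training-free risk-signal injection: estimate a "risk
# direction" as the mean difference between unsafe and safe feature vectors,
# then nudge inference-time features along it so downstream safety behavior
# triggers more reliably. All vectors here are fabricated.
def risk_direction(unsafe_feats, safe_feats):
    dim = len(unsafe_feats[0])
    mean = lambda feats, i: sum(f[i] for f in feats) / len(feats)
    return [mean(unsafe_feats, i) - mean(safe_feats, i) for i in range(dim)]

def inject_risk(feat, direction, alpha=0.5):
    # Amplify the risk-aligned component; alpha trades safety vs. utility.
    return [x + alpha * d for x, d in zip(feat, direction)]

direction = risk_direction([[1.0, 0.0], [1.0, 0.2]], [[0.0, 0.0], [0.0, 0.2]])
print(inject_risk([0.5, 0.5], direction))  # [1.0, 0.5]
```

Nothing is trained here, which is the appeal of this family of defenses: the intervention is a fixed vector operation at inference time.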

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10

From Dispersion to Attraction: Spectral Dynamics of Hallucination Across Whisper Model Scales

Researchers propose the Spectral Sensitivity Theorem to explain hallucinations in large ASR models like Whisper, identifying a phase transition between dispersive and attractor regimes. Analysis of model eigenspectra reveals that intermediate models experience structural breakdown while large models compress information, decoupling from acoustic evidence and increasing hallucination risk.
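
A toy numeric picture of the dispersive-versus-attractor distinction the theorem draws (a schematic 1-D linear map, not the paper's model of Whisper):

```python
# Iterating a linear map contracts toward an attractor when its gain
# (spectral radius, in the matrix case) is below 1 and disperses when it
# is above 1 -- the qualitative phase transition the theorem describes.
def iterate(gain, x0, steps):
    x = x0
    for _ in range(steps):
        x = gain * x
    return x

print(abs(iterate(0.9, 1.0, 50)) < 1e-2)  # True: attractor regime, state collapses
print(abs(iterate(1.1, 1.0, 50)) > 1e2)   # True: dispersive regime, state grows
```

In the attractor regime the state forgets its input, which is the analogue of a decoder decoupling from acoustic evidence.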

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

SALLIE: Safeguarding Against Latent Language & Image Exploits

Researchers introduce SALLIE, a lightweight runtime defense framework that detects and mitigates jailbreak attacks and prompt injections in large language and vision-language models simultaneously. Using mechanistic interpretability and internal model activations, SALLIE achieves robust protection across multiple architectures without degrading performance or requiring architectural changes.
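
Activation-based runtime detection of this kind can be sketched as a small probe over internal activations; the weights and threshold below are fabricated for illustration:

```python
# Hedged sketch of an activation-probe defense: a tiny linear probe over a
# model's internal activations flags jailbreak-like inputs at runtime, with
# no change to the host model's architecture.
def probe_score(activation, weights, bias):
    return sum(a * w for a, w in zip(activation, weights)) + bias

def is_attack(activation, weights, bias, threshold=0.0):
    # Above-threshold scores trigger a refusal/mitigation path.
    return probe_score(activation, weights, bias) > threshold

weights, bias = [0.9, -0.4, 1.1], -0.5
print(is_attack([1.2, 0.1, 0.8], weights, bias))  # True  -> block or refuse
print(is_attack([0.1, 0.9, 0.0], weights, bias))  # False -> proceed normally
```

Since the probe reads activations rather than raw text, the same mechanism can cover both language and vision-language inputs, which fits SALLIE's cross-modal claim.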

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Harnessing Hyperbolic Geometry for Harmful Prompt Detection and Sanitization

Researchers propose HyPE and HyPS, a two-part defense framework using hyperbolic geometry to detect and neutralize harmful prompts in Vision-Language Models. The approach offers a lightweight, interpretable alternative to blacklist filters and classifier-based systems that are vulnerable to adversarial attacks.
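
The hyperbolic-distance ingredient can be illustrated with the standard Poincaré-ball metric; the embeddings and the "harmful anchor" are made-up values, not outputs of the actual HyPE pipeline:

```python
import math

# Sketch of hyperbolic prompt scoring, assuming prompt embeddings live in a
# Poincare ball where harmful prompts cluster near a reference point. A prompt
# is flagged when it sits closer (in hyperbolic distance) to that cluster.
def poincare_distance(u, v):
    sq = lambda x: sum(c * c for c in x)
    diff = sq([a - b for a, b in zip(u, v)])
    return math.acosh(1 + 2 * diff / ((1 - sq(u)) * (1 - sq(v))))

harmful_anchor = [0.6, 0.0]
prompt_a = [0.55, 0.05]   # embedding near the harmful cluster
prompt_b = [-0.4, -0.3]   # embedding far from it
print(poincare_distance(harmful_anchor, prompt_a)
      < poincare_distance(harmful_anchor, prompt_b))  # True
```

Hyperbolic space is attractive here because hierarchies (benign topic → risky topic → explicit harm) embed with low distortion, unlike in flat Euclidean space.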

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Mitigating Reward Hacking in RLHF via Advantage Sign Robustness

Researchers propose Sign-Certified Policy Optimization (SignCert-PO) to address reward hacking in reinforcement learning from human feedback (RLHF), a critical problem where AI models exploit learned reward systems rather than improving actual performance. The lightweight approach down-weights non-robust responses during policy optimization and showed improved win rates on summarization and instruction-following benchmarks.
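
The advantage-sign idea can be sketched as a simple robustness check: trust a sample's advantage only if its sign agrees across perturbed reward estimates. The weighting scheme and numbers below are illustrative, not the paper's exact formulation:

```python
# Illustrative sketch of sign-robust advantage weighting: an advantage
# estimate is trusted only if its sign survives perturbations of the reward
# model (represented here as a list of advantage estimates for the same
# response); otherwise the sample is down-weighted in the policy update.
def sign_robust_weight(advantages, keep=1.0, drop=0.1):
    signs = {a > 0 for a in advantages}
    return keep if len(signs) == 1 else drop

# Advantage for one response under the nominal and two perturbed reward models:
print(sign_robust_weight([0.8, 0.6, 0.9]))   # 1.0 -> sign is robust, full weight
print(sign_robust_weight([0.4, -0.2, 0.1]))  # 0.1 -> sign flips, down-weighted
```

The intuition: reward-hacked responses tend to score well only under the specific learned reward model, so their advantage sign flips under perturbation.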

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

On the Rate of Convergence of GD in Non-linear Neural Networks: An Adversarial Robustness Perspective

Researchers prove that gradient descent in neural networks converges to optimal robustness margins at an extremely slow rate of Θ(1/ln(t)), even in simplified two-neuron settings. This establishes the first explicit lower bound on convergence rates for robustness margins in non-linear models, revealing fundamental limitations in neural network training efficiency.
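
A quick numeric illustration of why a Θ(1/ln t) rate is so slow: halving the margin gap requires roughly squaring the number of training steps (constant factor set to 1 for illustration):

```python
import math

# If the gap to the optimal robustness margin shrinks like c / ln(t), then
# each halving of the gap demands squaring the step count t.
gap = lambda t: 1.0 / math.log(t)
for t in [10, 100, 10_000, 100_000_000]:
    print(t, round(gap(t), 3))  # 0.434, 0.217, 0.109, 0.054
```

Compare this with the familiar 1/t or 1/sqrt(t) rates, where halving the gap costs only a constant-factor increase in steps.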

AI · Neutral · arXiv – CS AI · 6d ago · 6/10

GF-Score: Certified Class-Conditional Robustness Evaluation with Fairness Guarantees

Researchers introduce GF-Score, a framework that evaluates neural network robustness across individual classes while measuring fairness disparities, eliminating the need for expensive adversarial attacks through self-calibration. Testing across 22 models reveals consistent vulnerability patterns and shows that more robust models paradoxically exhibit greater class-level fairness disparities.
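
Per-class robustness reporting of this general shape can be sketched as follows; the per-example records are fabricated, and the disparity measure (max minus min class accuracy) is one simple choice, not necessarily the paper's:

```python
# Minimal sketch of class-conditional robustness evaluation: compute robust
# accuracy per class, then summarize fairness as the spread across classes.
def class_robustness(records):
    # records: list of (true_class, survived_attack) pairs, survived in {0, 1}
    per_class = {}
    for cls, ok in records:
        hits, total = per_class.get(cls, (0, 0))
        per_class[cls] = (hits + ok, total + 1)
    accs = {cls: hits / total for cls, (hits, total) in per_class.items()}
    return accs, max(accs.values()) - min(accs.values())

records = [("cat", 1), ("cat", 1), ("cat", 0), ("dog", 1), ("dog", 0), ("dog", 0)]
accs, disparity = class_robustness(records)
print({c: round(a, 2) for c, a in accs.items()}, round(disparity, 2))
# {'cat': 0.67, 'dog': 0.33} 0.33
```

A single aggregate robust-accuracy number would hide exactly the disparity this framework is designed to expose.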

AI · Neutral · arXiv – CS AI · Apr 14 · 6/10

Critical-CoT: A Robust Defense Framework against Reasoning-Level Backdoor Attacks in Large Language Models

Researchers introduce Critical-CoT, a defense framework that protects large language models against reasoning-level backdoor attacks by fine-tuning models to develop critical thinking behaviors. Unlike token-level backdoors, these attacks inject malicious reasoning steps into chain-of-thought processes, making them harder to detect; the proposed defense demonstrates strong robustness across multiple LLMs and datasets.

AI · Bullish · arXiv – CS AI · Apr 14 · 6/10

QShield: Securing Neural Networks Against Adversarial Attacks using Quantum Circuits

Researchers introduce QShield, a hybrid quantum-classical neural network architecture that combines traditional CNNs with quantum processing modules to defend deep learning models against adversarial attacks. Testing on MNIST, OrganAMNIST, and CIFAR-10 datasets shows the hybrid approach maintains accuracy while substantially reducing attack success rates and increasing computational costs for adversaries.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Dictionary-Aligned Concept Control for Safeguarding Multimodal LLMs

Researchers introduce Dictionary-Aligned Concept Control (DACO), a framework that uses a curated dictionary of 15,000 multimodal concepts and Sparse Autoencoders to improve safety in multimodal large language models by steering their activations at inference time. Testing across multiple models shows DACO significantly enhances safety performance while preserving general-purpose capabilities without requiring model retraining.
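
Concept-level steering can be sketched as projecting an activation away from a dictionary direction; the vectors are illustrative, and treating a single direction as "one SAE concept" is our simplifying assumption:

```python
# Hedged sketch of inference-time concept suppression: remove an activation's
# component along an unsafe concept direction (in practice, a sparse
# autoencoder decoder direction aligned to a dictionary entry).
def suppress_concept(activation, concept_dir, strength=1.0):
    dot = sum(a * c for a, c in zip(activation, concept_dir))
    norm_sq = sum(c * c for c in concept_dir)
    coeff = strength * dot / norm_sq
    # Subtract the (scaled) projection onto the concept direction.
    return [a - coeff * c for a, c in zip(activation, concept_dir)]

activation = [2.0, 1.0, 0.0]
unsafe_dir = [1.0, 0.0, 0.0]  # stand-in for one dictionary concept direction
print(suppress_concept(activation, unsafe_dir))  # [0.0, 1.0, 0.0]
```

Because only the targeted component is removed, activity along other directions (and hence general capability) is left intact, matching the utility-preservation claim.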

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

More Agents Improve Math Problem Solving but Adversarial Robustness Gap Persists

Research reveals that while increasing the number of LLM agents improves mathematical problem-solving accuracy, these multi-agent systems remain vulnerable to adversarial attacks. The study found that human-like typos pose the greatest threat to robustness, and the adversarial vulnerability gap persists regardless of agent count.
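
The two mechanisms in play, majority voting over agents and human-like typo perturbation of the shared input, can be sketched with toys (this is not the study's protocol, just an illustration of why voting fails here):

```python
import random

# Majority voting over agent answers helps when errors are independent; a typo
# injected into the shared question corrupts every agent's input at once, so
# adding agents cannot vote the error away.
def majority_vote(answers):
    return max(set(answers), key=answers.count)

def inject_typo(text, rng):
    # Human-like typo: swap two adjacent characters at a random position.
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

print(majority_vote([42, 42, 41, 42, 40]))  # 42
rng = random.Random(0)
print(inject_typo("what is 6 times 7?", rng))
```

Voting aggregates independent mistakes, but a perturbation upstream of all agents is a common-mode failure, which is the persistence result in a nutshell.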

AI · Llama · Bullish · Hugging Face Blog · Dec 23 · 6/10

AprielGuard: A Guardrail for Safety and Adversarial Robustness in Modern LLM Systems

AprielGuard is presented as a guardrail framework for large language models (LLMs), aiming to strengthen both safety and adversarial robustness. It reflects ongoing industry efforts to address security vulnerabilities and safety concerns in modern AI systems.

AI · Neutral · OpenAI News · Jan 22 · 5/10

Trading inference-time compute for adversarial robustness

The article covers research on trading additional inference-time compute for adversarial robustness, examining how allocating more compute at inference can harden AI systems against adversarial attacks.

AI · Neutral · arXiv – CS AI · Mar 27 · 5/10

NERO-Net: A Neuroevolutionary Approach for the Design of Adversarially Robust CNNs

Researchers developed NERO-Net, a neuroevolutionary approach to design convolutional neural networks with inherent resistance to adversarial attacks without requiring robust training methods. The evolved architecture achieved 47% adversarial accuracy and 93% clean accuracy on CIFAR-10, demonstrating that architectural design can provide intrinsic robustness against adversarial examples.
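
A schematic neuroevolution loop of the kind this approach belongs to; the "architectures" here are just two-number vectors, and the fitness function is a stand-in for real clean/adversarial evaluation:

```python
import random

# Toy neuroevolution sketch: mutate candidate architectures and select by a
# fitness that mixes (stand-in) clean and adversarial accuracy. Real systems
# evaluate actual trained CNNs at this step.
def fitness(arch):
    clean, robust = arch
    return 0.5 * clean + 0.5 * robust

def evolve(pop, generations, rng):
    for _ in range(generations):
        parent = max(pop, key=fitness)
        child = [min(1.0, max(0.0, g + rng.uniform(-0.05, 0.05))) for g in parent]
        pop = sorted(pop + [child], key=fitness)[1:]  # drop the weakest
    return max(pop, key=fitness)

rng = random.Random(1)
pop = [[rng.random(), rng.random()] for _ in range(6)]
best = evolve(pop, 50, rng)
print(round(fitness(best), 2))
```

The selling point of the architectural route is that robustness comes from the evolved design itself rather than from adversarial training of the weights.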

AI · Neutral · OpenAI News · May 3 · 4/10

Transfer of adversarial robustness between perturbation types

The article discusses research on adversarial robustness transfer between different types of perturbations in machine learning models. This research examines how defensive techniques developed for one type of attack may provide protection against other types of adversarial examples.