y0news

#robustness-testing News & Analysis

3 articles tagged with #robustness-testing. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10
🧠

Physical Adversarial Attacks on AI Surveillance Systems: Detection, Tracking, and Visible–Infrared Evasion

This research paper examines physical adversarial attacks on AI surveillance systems from a system-level perspective, arguing that robustness cannot be assessed from isolated image benchmarks alone. The study highlights critical gaps in current evaluation practice: temporal persistence across frames, multi-modal sensing (visible and infrared), realistic attack carriers, and system-level objectives, all of which must be tested under actual deployment constraints.
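One of the gaps the paper names, temporal persistence, can be illustrated with a minimal sketch: rather than scoring an attack on a single image, score it on the fraction of video frames over which it keeps evading the detector. The frame outcomes below are invented booleans, not output from any real detector or the paper's code.

```python
# Hypothetical temporal-persistence metric for a physical attack:
# the fraction of frames on which the target goes undetected.
# Frame-level evasion outcomes are invented for illustration.

def persistence(evaded_frames):
    """Fraction of frames where the attack evaded detection."""
    return sum(evaded_frames) / len(evaded_frames)

# Per-frame evasion outcomes for one simulated clip (True = evaded).
clip = [True, True, False, True, True, True, False, True]
rate = persistence(clip)  # 6 of 8 frames evaded -> 0.75
```

A single-image benchmark would call the attack above a success or failure outright; the per-frame rate exposes that it only holds up 75% of the time.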

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Eva-VLA: Evaluating Vision-Language-Action Models' Robustness Under Real-World Physical Variations

Researchers introduced Eva-VLA, the first unified framework to systematically evaluate the robustness of Vision-Language-Action models for robotic manipulation under real-world physical variations. Testing revealed that OpenVLA exhibits failure rates above 90% across three categories of physical variation, exposing critical weaknesses in current VLA models when deployed outside laboratory conditions.
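The headline number in such an evaluation is a per-variation failure rate. The sketch below tallies one in the spirit of the evaluation described above; the variation category names and trial outcomes are invented stand-ins, not Eva-VLA's actual taxonomy or data.

```python
# Hypothetical per-variation failure-rate tally for robot-manipulation
# trials. Category names and outcomes are invented for illustration.
from collections import defaultdict

# (variation category, trial succeeded?) for a batch of rollouts.
trials = [
    ("object_pose", False), ("object_pose", False), ("object_pose", True),
    ("lighting", False), ("lighting", False),
    ("texture", False), ("texture", True),
]

counts = defaultdict(lambda: [0, 0])  # category -> [failures, total]
for variation, success in trials:
    counts[variation][0] += 0 if success else 1
    counts[variation][1] += 1

failure_rates = {v: f / n for v, (f, n) in counts.items()}
```

Reporting the rate per category, rather than one pooled number, is what lets an evaluation pinpoint which physical variation breaks the model.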

AI · Neutral · OpenAI News · Aug 22 · 6/10
🧠

Testing robustness against unforeseen adversaries

Researchers have developed a new method to evaluate neural network classifiers' ability to defend against previously unseen adversarial attacks. The approach introduces the UAR (Unforeseen Attack Robustness) metric to assess model performance against unanticipated threats and emphasizes testing across diverse attack scenarios.
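The core idea of evaluating against unforeseen attacks can be sketched as: measure accuracy separately under each held-out perturbation family, then summarize by the worst case. Everything below is a toy stand-in (a threshold "classifier" and two invented perturbations), not the UAR computation or code from the article.

```python
# Hypothetical worst-case-robustness sketch: score a toy classifier
# under several perturbation families it was never tuned against.
import random

random.seed(0)  # make the noisy perturbation reproducible

def classify(x):
    # Toy classifier: predict class 1 when input exceeds 0.5.
    return 1 if x > 0.5 else 0

def accuracy_under_attack(samples, labels, attack):
    correct = sum(classify(attack(x)) == y for x, y in zip(samples, labels))
    return correct / len(samples)

# Clean inputs: class 1 clusters near 0.8, class 0 near 0.2.
samples = [0.8, 0.9, 0.2, 0.1, 0.75, 0.3]
labels = [1, 1, 0, 0, 1, 0]

# Invented stand-ins for "unforeseen" perturbation families.
attacks = {
    "none": lambda x: x,
    "shift": lambda x: x - 0.3,
    "noise": lambda x: x + random.uniform(-0.3, 0.3),
}

per_attack = {name: accuracy_under_attack(samples, labels, atk)
              for name, atk in attacks.items()}
worst_case = min(per_attack.values())
```

The worst-case summary is what makes the evaluation honest: a model that averages well across attacks can still collapse on one family, and the minimum surfaces that.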