arXiv · CS AI · 10h ago
Eva-VLA: Evaluating Vision-Language-Action Models' Robustness Under Real-World Physical Variations
Researchers introduced Eva-VLA, the first unified framework for systematically evaluating the robustness of Vision-Language-Action (VLA) models for robotic manipulation under real-world physical variations. Testing revealed that OpenVLA exhibits failure rates above 90% across three categories of physical variation, exposing critical weaknesses in current VLA models when deployed outside laboratory conditions.