AI · Neutral · arXiv – CS AI · 10h ago · 7/10
Eva-VLA: Evaluating Vision-Language-Action Models' Robustness Under Real-World Physical Variations

Researchers introduced Eva-VLA, the first unified framework to systematically evaluate the robustness of Vision-Language-Action (VLA) models for robotic manipulation under real-world physical variations. In testing, OpenVLA exhibited failure rates above 90% across three categories of physical variation, exposing critical weaknesses in current VLA models when deployed outside laboratory conditions.
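The headline metric here, a per-variation failure rate, can be sketched in a few lines. The snippet below is an illustration only: the category names, the `(category, success)` episode format, and the `failure_rates` helper are assumptions for demonstration, not Eva-VLA's actual taxonomy or API.

```python
from collections import defaultdict

def failure_rates(episodes):
    """Aggregate per-category failure rates from evaluation episodes.

    `episodes` is a list of (variation_category, success_bool) pairs.
    Returns {category: fraction_of_failed_episodes}.
    """
    totals = defaultdict(int)
    failures = defaultdict(int)
    for category, success in episodes:
        totals[category] += 1
        if not success:
            failures[category] += 1
    return {c: failures[c] / totals[c] for c in totals}

# Hypothetical data: a model succeeding only 1 of 10 trials per category
episodes = [("lighting", i == 0) for i in range(10)] + \
           [("camera_pose", i == 0) for i in range(10)]
print(failure_rates(episodes))  # {'lighting': 0.9, 'camera_pose': 0.9}
```

Reporting failures per category, rather than one pooled number, is what lets an evaluation like this localize which physical variation breaks a model.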