#mobile-manipulation · 1 article
AI · Bullish · arXiv – CS AI · 10h ago · 6/10

AnoleVLA: Lightweight Vision-Language-Action Model with Deep State Space Models for Mobile Manipulation

Researchers have developed AnoleVLA, a lightweight Vision-Language-Action model for robotic manipulation that uses deep state space models in place of the transformer backbones typical of VLAs. The model achieved a task success rate 21 points higher than large-scale VLAs while running three times faster, making it suitable for resource-constrained robotic platforms.
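The summary doesn't detail AnoleVLA's architecture, but the speed advantage of state space models over transformers comes from their linear-time sequential processing versus attention's quadratic cost. A minimal sketch (hypothetical, not the paper's actual model) of a discretized linear state-space recurrence illustrates the idea:

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Linear state-space recurrence over a 1-D input sequence u:
        x_t = A @ x_{t-1} + B * u_t
        y_t = C @ x_t
    Cost is O(L) in sequence length L, versus O(L^2) for self-attention,
    which is one reason SSM backbones suit resource-constrained robots.
    """
    d_state = A.shape[0]
    x = np.zeros(d_state)          # hidden state, carried across timesteps
    ys = []
    for u_t in u:
        x = A @ x + B * u_t        # state update
        ys.append(C @ x)           # readout
    return np.array(ys)

# Toy example with a stable (decaying) state matrix
A = 0.5 * np.eye(2)
B = np.ones(2)
C = np.ones(2)
print(ssm_scan(A, B, C, [1.0, 1.0]))  # → [2. 3.]
```

Deep SSMs such as S4 or Mamba stack learned, structured versions of this recurrence; the illustrative point is that each new timestep costs a constant amount of work rather than attending over the whole history.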