y0news

#motion-planning News & Analysis

5 articles tagged with #motion-planning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Large-Language-Model-Guided State Estimation for Partially Observable Task and Motion Planning

Researchers developed CoCo-TAMP, a robot planning framework that uses large language models to improve state estimation in partially observable environments. The system leverages LLMs' common-sense reasoning to predict object locations and co-locations, achieving a 62–73% reduction in planning time compared to baseline methods.
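One way to picture the idea: an LLM's common-sense prior over likely object locations can re-weight a particle-style belief before planning, so the planner searches probable hiding spots first. This is a minimal illustrative sketch, not the CoCo-TAMP implementation; the location names and prior probabilities are invented placeholders.

```python
import numpy as np

# Hypothetical LLM-derived prior: "a mug is probably near the sink
# or the coffee maker" expressed as location probabilities.
llm_prior = {"sink": 0.5, "coffee_maker": 0.3, "table": 0.2}

def reweight_particles(particles, weights, prior):
    """Multiply each belief particle's weight by the LLM prior for the
    location it hypothesizes, then renormalize to a distribution.
    Locations the LLM never mentioned get a small floor probability."""
    w = np.array([weights[i] * prior.get(loc, 1e-3)
                  for i, loc in enumerate(particles)])
    return w / w.sum()

# Uniform belief over four candidate locations for an unseen object.
particles = ["sink", "table", "drawer", "coffee_maker"]
weights = [0.25, 0.25, 0.25, 0.25]
posterior = reweight_particles(particles, weights, llm_prior)
```

After reweighting, the belief concentrates on the LLM-favored locations, which is what lets the planner cut search (and hence planning time) in partially observable scenes.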

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

XR-DT: Extended Reality-Enhanced Digital Twin for Safe Motion Planning via Human-Aware Model Predictive Path Integral Control

Researchers developed XR-DT, an Extended Reality-enhanced Digital Twin framework that combines augmented, virtual, and mixed reality to improve human-robot interaction in shared workspaces. The system pairs a Human-Aware Model Predictive Path Integral controller with ATLAS, a Transformer-based trajectory prediction system, to enable safer and more interpretable robot navigation around humans.
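For readers unfamiliar with the underlying controller, Model Predictive Path Integral (MPPI) control samples many noisy control sequences, rolls each through a dynamics model, and averages them with weights that decay exponentially in rollout cost. The sketch below shows one generic MPPI update step, not the paper's human-aware variant; the dynamics, cost function, and parameter values are placeholders.

```python
import numpy as np

def mppi_step(x0, dynamics, cost, u_nom,
              n_samples=256, sigma=0.5, lam=1.0):
    """One MPPI update: perturb the nominal control sequence u_nom
    (shape [horizon, control_dim]) with Gaussian noise, roll out each
    perturbed sequence, and return a cost-weighted updated sequence."""
    horizon, udim = u_nom.shape
    noise = np.random.randn(n_samples, horizon, udim) * sigma
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = np.array(x0, dtype=float)
        for t in range(horizon):
            u = u_nom[t] + noise[k, t]
            x = dynamics(x, u)           # forward-simulate one step
            costs[k] += cost(x, u)       # accumulate rollout cost
    # Softmax-style weights: low-cost rollouts dominate the update.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + np.einsum("k,kti->ti", w, noise)

# Toy usage: steer a 2-D point mass toward the goal (1, 1).
np.random.seed(0)
dyn = lambda x, u: x + 0.1 * u
goal_cost = lambda x, u: float(np.sum((x - np.array([1.0, 1.0])) ** 2))
u_new = mppi_step(np.zeros(2), dyn, goal_cost, np.zeros((20, 2)))
```

A human-aware variant like the one summarized above would fold predicted human trajectories (here, from ATLAS) into the cost term so that rollouts passing near people are penalized.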

AI · Neutral · arXiv – CS AI · Mar 26 · 4/10

Toward Generalist Neural Motion Planners for Robotic Manipulators: Challenges and Opportunities

Researchers have published a comprehensive review analyzing state-of-the-art neural motion planners for robotic manipulators, highlighting their benefits in fast inference but limitations in generalizing to unseen environments. The paper outlines a path toward developing generalist neural motion planners that could better handle domain-specific challenges in cluttered, real-world environments.

AI · Neutral · arXiv – CS AI · Mar 16 · 4/10

Evaluating VLMs' Spatial Reasoning Over Robot Motion: A Step Towards Robot Planning with Motion Preferences

Researchers evaluated four state-of-the-art Vision-Language Models (VLMs) on their ability to perform spatial reasoning for robot motion planning. Qwen2.5-VL achieved the highest performance at 71.4% accuracy zero-shot and 75% after fine-tuning, while GPT-4o showed lower performance in handling motion preferences and spatial constraints.

🧠 Summaries generated with GPT-4