
#human-robot-interaction News & Analysis

6 articles tagged with #human-robot-interaction. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 3d ago · 6/10
🧠

Human-Inspired Context-Selective Multimodal Memory for Social Robots

Researchers have developed a context-selective, multimodal memory system for social robots that mimics human cognitive processes by prioritizing emotionally salient and novel experiences. The system combines text and visual data to enable personalized, context-aware interactions with users, outperforming existing memory models and maintaining real-time performance.
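
The retention idea in this summary, keeping only experiences that are emotionally salient or novel once memory fills up, can be sketched as a simple score-and-truncate step. The scoring function, the equal weighting, and all names below are illustrative assumptions, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    salience: float  # emotional intensity of the experience, 0..1
    novelty: float   # dissimilarity to what is already stored, 0..1

def select_to_keep(memories, capacity):
    """Context-selective retention sketch: score each experience by
    emotional salience plus novelty, keep only the top `capacity`."""
    ranked = sorted(memories, key=lambda m: m.salience + m.novelty, reverse=True)
    return ranked[:capacity]
```

A real system would also fold in recency and the current interaction context; this shows only the prioritization step.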

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

XR-DT: Extended Reality-Enhanced Digital Twin for Safe Motion Planning via Human-Aware Model Predictive Path Integral Control

Researchers developed XR-DT, an Extended Reality-enhanced Digital Twin framework that combines augmented, virtual, and mixed reality to improve human-robot interaction in shared workspaces. The system pairs a novel Human-Aware Model Predictive Path Integral controller with ATLAS, a Transformer-based trajectory predictor, to enable safer and more interpretable robot navigation around humans.
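
Model Predictive Path Integral control, the sampling-based planner at the heart of this system, is easy to sketch: roll out many noisy control sequences through a dynamics model, score each trajectory, and average the controls with exponential cost weights. The human-aware and XR components would live inside the cost function; everything below is a generic minimal sketch, not the paper's implementation:

```python
import math
import random

def mppi_step(state, nominal_u, dynamics, cost, horizon=10, samples=64,
              sigma=0.3, lam=1.0, seed=0):
    """One MPPI update: sample noisy control sequences around nominal_u,
    roll each out through `dynamics`, accumulate `cost`, and return a
    cost-weighted average control sequence."""
    rng = random.Random(seed)
    noises, costs = [], []
    for _ in range(samples):
        eps = [rng.gauss(0, sigma) for _ in range(horizon)]
        x, c = state, 0.0
        for t in range(horizon):
            u = nominal_u[t] + eps[t]
            x = dynamics(x, u)
            c += cost(x, u)
        noises.append(eps)
        costs.append(c)
    beta = min(costs)                                 # for numerical stability
    w = [math.exp(-(c - beta) / lam) for c in costs]  # exponential weighting
    z = sum(w)
    return [nominal_u[t] + sum(w[i] * noises[i][t] for i in range(samples)) / z
            for t in range(horizon)]
```

On a toy 1-D system (x' = x + 0.1·u, cost x² + 0.01·u²) starting at x = 1, the first returned control is negative, pushing the state toward zero, which is exactly the behavior the weighting is meant to produce.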

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Monocular 3D Object Position Estimation with VLMs for Human-Robot Interaction

Researchers developed a Vision-Language Model capable of estimating 3D object positions from monocular RGB images for human-robot interaction. The model achieved a median error of 13 mm and produces predictions accurate enough for robot interaction in 25% of cases, a five-fold improvement over baseline methods.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
🧠

SignVLA: A Gloss-Free Vision-Language-Action Framework for Real-Time Sign Language-Guided Robotic Manipulation

Researchers have developed SignVLA, the first sign language-driven Vision-Language-Action framework for human-robot interaction that directly translates sign gestures into robotic commands without requiring intermediate gloss annotations. The system currently focuses on real-time alphabet-level finger-spelling for robotic control and is designed to support future expansion to word and sentence-level understanding.

AI · Bullish · arXiv – CS AI · Mar 11 · 5/10
🧠

Improving through Interaction: Searching Behavioral Representation Spaces with CMA-ES-IG

Researchers developed CMA-ES-IG, a new algorithm that helps robots learn user preferences more effectively by incorporating user experience considerations. The algorithm suggests perceptually distinct and informative robot behaviors for users to rank, showing improved scalability, computational efficiency, and user satisfaction compared to existing methods.
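
The interaction loop described here, proposing a batch of behaviors, having the user rank them, and recentering the search on the preferred ones, can be illustrated with a stripped-down evolution strategy. This is a plain (μ, λ)-style sketch standing in for CMA-ES-IG, which additionally adapts the search covariance and selects perceptually distinct, informative candidates; all names and parameters are illustrative:

```python
import random

def preference_es(rank_fn, dim=4, pop=6, iters=30, sigma=0.5, seed=0):
    """Preference-based search sketch: each iteration samples `pop`
    candidate behavior embeddings around the current mean, asks the
    user (rank_fn, best first) to rank them, and recenters on the
    top half. A stand-in for CMA-ES-IG, not the paper's algorithm."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    for _ in range(iters):
        cands = [[m + sigma * rng.gauss(0, 1) for m in mean] for _ in range(pop)]
        order = rank_fn(cands)                 # indices, best to worst
        elite = [cands[i] for i in order[: pop // 2]]
        mean = [sum(col) / len(elite) for col in zip(*elite)]
        sigma *= 0.95                          # gradually narrow the search
    return mean

# A simulated user who prefers behaviors near a hidden target embedding:
target = [1.0, -0.5, 0.25, 0.0]
def rank_by_distance(cands):
    dist = lambda i: sum((a - b) ** 2 for a, b in zip(cands[i], target))
    return sorted(range(len(cands)), key=dist)

best = preference_es(rank_by_distance)
```

After 30 simulated ranking rounds the mean moves from the origin toward the hidden target, i.e. toward the behaviors the "user" prefers.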

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10
🧠

Evaluating Zero-Shot and One-Shot Adaptation of Small Language Models in Leader-Follower Interaction

Researchers benchmarked small language models (SLMs) for leader-follower role classification in human-robot interaction, finding that a fine-tuned Qwen2.5-0.5B achieves 86.66% accuracy with 22.2 ms latency. The study demonstrates that SLMs can handle real-time role assignment on resource-constrained robots, though performance degrades as dialogue complexity increases.
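
The task itself is easy to picture: each dialogue turn gets a "leader" or "follower" label. The sketch below uses a trivial keyword heuristic purely to show the input/output shape; in the study this decision is made by a fine-tuned SLM such as Qwen2.5-0.5B, and the cue list here is an invented placeholder:

```python
def classify_role(utterance: str) -> str:
    """Label a dialogue turn as 'leader' (issuing directions) or
    'follower' (acknowledging/complying). A real pipeline would send
    the turn to a small language model with a classification prompt;
    this keyword heuristic only illustrates the task's shape."""
    directive_cues = ("let's", "go to", "pick up", "you should", "bring", "move")
    u = utterance.lower()
    return "leader" if any(cue in u for cue in directive_cues) else "follower"
```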