y0news

#embodied-ai News & Analysis

49 articles tagged with #embodied-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Grounded World Model for Semantically Generalizable Planning

Researchers propose Grounded World Model (GWM), a novel approach to visuomotor planning that aligns world models with vision-language embeddings rather than requiring explicit goal images. The method achieves 87% success on unseen tasks versus 22% for traditional vision-language action models, demonstrating superior semantic generalization in robotics and embodied AI applications.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

TimeRewarder: Learning Dense Reward from Passive Videos via Frame-wise Temporal Distance

TimeRewarder is a new machine learning method that learns dense reward signals from passive videos to improve reinforcement learning in robotics. By modeling temporal distances between video frames, the approach achieves 90% success rates on Meta-World tasks using significantly fewer environment interactions than prior methods, while also leveraging human videos for scalable reward learning.
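The core idea, rewarding each step by how much it reduces a predicted temporal distance to the goal frame, can be sketched as follows. This is a toy illustration, not the paper's implementation: the distance predictor and the 1-D "frame embeddings" are placeholders standing in for a network trained on passive video.

```python
import numpy as np

def temporal_distance(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Stand-in for a learned frame-wise temporal distance predictor.

    In TimeRewarder this would be a model trained on passive videos to
    estimate how many steps separate two frames; here we substitute a
    plain L2 distance between placeholder frame embeddings.
    """
    return float(np.linalg.norm(frame_a - frame_b))

def dense_rewards(frames: list[np.ndarray], goal: np.ndarray) -> list[float]:
    """Reward each step by how much it reduces predicted distance to goal."""
    dists = [temporal_distance(f, goal) for f in frames]
    # Progress-based reward: positive when the agent moves "closer in time"
    # to the goal frame, negative when it drifts away from it.
    return [dists[t] - dists[t + 1] for t in range(len(dists) - 1)]

# Toy rollout: 1-D "embeddings" approaching the goal monotonically.
goal = np.array([10.0])
frames = [np.array([float(x)]) for x in (0, 3, 6, 9)]
print(dense_rewards(frames, goal))  # every step earns a positive reward
```

Because the signal is defined for every frame pair, it is dense: the agent gets feedback at each step rather than only at task completion, which is what reduces the number of environment interactions needed.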

AI · Bullish · Decrypt – AI · 2d ago · 7/10

Japan's Tech Titans Just Teamed Up to Build a Trillion-Parameter AI—And It's Not Here to Chat

Japan's largest tech companies—SoftBank, Sony, Honda, and NEC—have jointly established a new venture focused on developing trillion-parameter AI systems designed specifically for robotics and physical automation, securing $6.7 billion in Japanese government backing. This represents a strategic pivot away from conversational AI toward practical, embodied AI applications.

AI · Neutral · arXiv – CS AI · 3d ago · 7/10

PilotBench: A Benchmark for General Aviation Agents with Safety Constraints

Researchers introduce PilotBench, a benchmark evaluating large language models on safety-critical aviation tasks using 708 real-world flight trajectories. The study reveals a fundamental trade-off: traditional forecasters achieve superior numerical precision (7.01 MAE) while LLMs provide better instruction-following (86-89%) but with significantly degraded prediction accuracy (11-14 MAE), exposing brittleness in implicit physics reasoning for embodied AI applications.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

Commanding Humanoid by Free-form Language: A Large Language Action Model with Unified Motion Vocabulary

Researchers introduce Humanoid-LLA, a Large Language Action Model enabling humanoid robots to execute complex physical tasks from natural language commands. The system combines a unified motion vocabulary, physics-aware controller, and reinforcement learning to achieve both language understanding and real-world robot control, demonstrating improved performance on Unitree G1 and Booster T1 humanoids.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

PhysInOne: Visual Physics Learning and Reasoning in One Suite

PhysInOne is a large-scale synthetic dataset containing 2 million videos across 153,810 dynamic 3D scenes designed to address the scarcity of physics-grounded training data for AI systems. The dataset covers 71 physical phenomena and includes comprehensive annotations, demonstrating significant improvements in physics-aware video generation, prediction, and property estimation when used to fine-tune foundation models.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Towards provable probabilistic safety for scalable embodied AI systems

Researchers propose a shift from deterministic to probabilistic safety verification for embodied AI systems. They argue that provable probabilistic guarantees offer a more practical path to large-scale deployment in safety-critical applications such as autonomous vehicles and robotics than the infeasible goal of absolute safety across all scenarios.
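As a concrete illustration of what a probabilistic guarantee can look like (this is the standard "rule of three" bound, not the paper's method): if a system passes n independent trials with zero failures, its failure probability is bounded by roughly 3/n at 95% confidence.

```python
import math

def failure_rate_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the per-trial failure probability after
    n_trials independent trials with zero observed failures.

    From (1 - p)^n >= 1 - confidence it follows that
    p <= -ln(1 - confidence) / n. At confidence = 0.95 this gives the
    classic "rule of three": p is bounded by about 3/n.
    """
    return -math.log(1.0 - confidence) / n_trials

# 10,000 failure-free trials bound the failure rate near 3e-4 at 95% confidence.
print(failure_rate_upper_bound(10_000))
```

The bound shrinks only linearly with trial count, which is one reason purely empirical testing scales poorly for rare failures and why provable probabilistic methods are attractive for embodied systems.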

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

ROSClaw: A Hierarchical Semantic-Physical Framework for Heterogeneous Multi-Agent Collaboration

Researchers introduce ROSClaw, a new AI framework that integrates large language models with robotic systems to improve multi-agent collaboration and long-horizon task execution. The framework addresses critical gaps between semantic understanding and physical execution by using unified vision-language models and enabling real-time coordination between simulated and real-world robots.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Are Video Reasoning Models Ready to Go Outside?

Researchers propose ROVA, a new training framework that improves vision-language models' robustness in real-world conditions, delivering accuracy gains of up to 24%. The framework addresses performance degradation from weather, occlusion, and camera motion, which can cause accuracy drops of up to 35% in current models.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Cognition to Control - Multi-Agent Learning for Human-Humanoid Collaborative Transport

Researchers developed a new three-layer hierarchy called cognition-to-control (C2C) for human-robot collaboration that combines vision-language models with multi-agent reinforcement learning. The system enables sustained deliberation and planning while maintaining real-time control for collaborative manipulation tasks between humans and humanoid robots.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

Chain of World: World Model Thinking in Latent Motion

Researchers introduce CoWVLA (Chain-of-World VLA), a new Vision-Language-Action model paradigm that combines world-model temporal reasoning with latent motion representation for embodied AI. The approach outperforms existing methods in robotic simulation benchmarks while maintaining computational efficiency through a unified autoregressive decoder that models both keyframes and action sequences.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI

Researchers developed D2E (Desktop to Embodied AI), a framework that uses desktop gaming data to pretrain AI models for robotics tasks. Their 1B-parameter model achieved 96.6% success on manipulation tasks and 83.3% on navigation, matching performance of models up to 7 times larger while using scalable desktop data instead of expensive physical robot training data.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Retrieval-Augmented Robots via Retrieve-Reason-Act

Researchers introduce Retrieval-Augmented Robotics (RAR), a new paradigm enabling robots to actively retrieve and use external visual documentation to execute complex tasks. The system uses a Retrieve-Reason-Act loop where robots search unstructured visual manuals, align 2D diagrams with 3D objects, and synthesize executable plans for assembly tasks.
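The Retrieve-Reason-Act loop described above can be sketched as a simple control skeleton. All names here (ManualPage, retrieve, reason, act) are illustrative placeholders, not the paper's API: real retrieval would search visual manuals and real reasoning would align 2D diagrams with observed 3D objects.

```python
from dataclasses import dataclass

@dataclass
class ManualPage:
    caption: str
    diagram: str  # placeholder for a 2D diagram image

def retrieve(manual: list[ManualPage], query: str) -> ManualPage:
    """Placeholder retrieval: pick the page whose caption mentions the query."""
    return next(p for p in manual if query in p.caption)

def reason(page: ManualPage, scene: dict) -> list[str]:
    """Placeholder reasoning: ground the diagram in the scene, emit a plan."""
    return [f"attach {obj} per {page.diagram}" for obj in scene["parts"]]

def act(plan: list[str]) -> list[str]:
    """Placeholder execution: 'run' each synthesized plan step."""
    return [f"done: {step}" for step in plan]

manual = [ManualPage("leg assembly", "fig-3"), ManualPage("shelf assembly", "fig-7")]
scene = {"parts": ["leg_a", "leg_b"]}

page = retrieve(manual, "leg")   # Retrieve: find the relevant manual page
plan = reason(page, scene)       # Reason: align the diagram with the scene
print(act(plan))                 # Act: execute the plan
```

The point of the loop structure is that documentation lookup happens at execution time, so the robot can handle assemblies it was never trained on, provided a manual exists.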

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

MA-CoNav: A Master-Slave Multi-Agent Framework with Hierarchical Collaboration and Dual-Level Reflection for Long-Horizon Embodied VLN

Researchers propose MA-CoNav, a multi-agent collaborative framework for robot navigation that uses a Master-Slave architecture to distribute cognitive tasks among specialized agents. The system outperforms existing Vision-Language Navigation methods by decoupling perception, planning, execution, and memory functions across different AI agents with hierarchical collaboration.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

UrbanVerse: Scaling Urban Simulation by Watching City-Tour Videos

UrbanVerse introduces a data-driven system that converts city-tour videos into realistic urban simulation environments for training AI agents such as delivery robots. The system includes 100K+ annotated 3D urban assets and shows significant improvements in navigation success rates, including a +30.1% gain in real-world transfer.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Human-Inspired Context-Selective Multimodal Memory for Social Robots

Researchers have developed a context-selective, multimodal memory system for social robots that mimics human cognitive processes by prioritizing emotionally salient and novel experiences. The system combines text and visual data to enable personalized, context-aware interactions with users, outperforming existing memory models and maintaining real-time performance.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10

OOWM: Structuring Embodied Reasoning and Planning via Object-Oriented Programmatic World Modeling

Researchers introduce Object-Oriented World Modeling (OOWM), a framework that structures LLM reasoning for robotic planning by replacing linear text with explicit symbolic representations using UML diagrams and object hierarchies. The approach combines supervised fine-tuning with group relative policy optimization to achieve superior planning performance on embodied tasks, demonstrating that formal software engineering principles can enhance AI reasoning capabilities.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10

EmbodiedGovBench: A Benchmark for Governance, Recovery, and Upgrade Safety in Embodied Agent Systems

Researchers introduce EmbodiedGovBench, a new evaluation framework for embodied AI systems that measures governance capabilities like controllability, policy compliance, and auditability rather than just task completion. The benchmark addresses a critical gap in AI safety by establishing standards for whether robot systems remain safe, recoverable, and responsive to human oversight under realistic failures.

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

Dejavu: Towards Experience Feedback Learning for Embodied Intelligence

Researchers introduce Dejavu, a post-deployment learning framework that enables frozen Vision-Language-Action policies to improve through experience retrieval and feedback networks. The system allows embodied AI agents to continuously learn from past trajectories without retraining, improving task performance across diverse robotic applications.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10

Toward Memory-Aided World Models: Benchmarking via Spatial Consistency

Researchers introduce a new benchmark dataset for evaluating world models' ability to maintain spatial consistency across long sequences, addressing a critical gap in AI evaluation. The dataset, collected from Minecraft environments with 20 million frames across 150 locations, enables the development of memory-augmented models that can reliably simulate physical spaces for downstream tasks like planning and simulation.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

VLA-Forget: Vision-Language-Action Unlearning for Embodied Foundation Models

Researchers introduce VLA-Forget, a new unlearning framework for vision-language-action (VLA) models used in robotic manipulation. The hybrid approach addresses the challenge of removing unsafe or unwanted behaviors from embodied AI foundation models while preserving their core perception, language, and action capabilities.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Scalable Object Relation Encoding for Better 3D Spatial Reasoning in Large Language Models

Researchers introduce QuatRoPE, a novel positional embedding method that improves 3D spatial reasoning in Large Language Models by encoding object relations more efficiently. The method maintains linear scalability with the number of objects and preserves LLMs' original capabilities through the Isolated Gated RoPE Extension.

AI · Bullish · Microsoft Research Blog · Mar 26 · 6/10

AsgardBench: A benchmark for visually grounded interactive planning

Microsoft Research introduces AsgardBench, a new benchmark for evaluating embodied AI systems that can perform visually grounded interactive planning. The benchmark focuses on testing robots' ability to observe environments, make decisions, and adapt when conditions change unexpectedly, using kitchen cleaning scenarios as examples.

Page 1 of 2