
#robotics News & Analysis

229 articles tagged with #robotics. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · TechCrunch – AI · Mar 11 · 7/10

Rivian spin-out Mind Robotics raises $500M for industrial AI-powered robots

Mind Robotics, a spin-out from Rivian founded by RJ Scaringe, has raised $500 million in funding to develop AI-powered industrial robots. The startup plans to leverage data from Rivian's manufacturing facilities to train its AI systems and deploy robotics solutions within the electric vehicle company's factories.

AI · Bullish · arXiv – CS AI · 23h ago · 7/10

Minimal Embodiment Enables Efficient Learning of Number Concepts in Robot

Researchers demonstrate that robots equipped with minimal embodied sensorimotor capabilities learn numerical concepts significantly faster than vision-only systems, achieving 96.8% counting accuracy with 10% of training data. The embodied neural network spontaneously develops biologically plausible number representations matching human cognitive development, suggesting embodiment acts as a structural learning prior rather than merely an information source.

AI · Bullish · arXiv – CS AI · 23h ago · 7/10

Grounded World Model for Semantically Generalizable Planning

Researchers propose Grounded World Model (GWM), a novel approach to visuomotor planning that aligns world models with vision-language embeddings rather than requiring explicit goal images. The method achieves 87% success on unseen tasks versus 22% for traditional vision-language action models, demonstrating superior semantic generalization in robotics and embodied AI applications.

AI · Bullish · arXiv – CS AI · 23h ago · 7/10

TimeRewarder: Learning Dense Reward from Passive Videos via Frame-wise Temporal Distance

TimeRewarder is a new machine learning method that learns dense reward signals from passive videos to improve reinforcement learning in robotics. By modeling temporal distances between video frames, the approach achieves 90% success rates on Meta-World tasks using significantly fewer environment interactions than prior methods, while also leveraging human videos for scalable reward learning.

AI · Bullish · Decrypt – AI · 1d ago · 7/10

Japan's Tech Titans Just Teamed Up to Build a Trillion-Parameter AI—And It's Not Here to Chat

Japan's largest tech companies—SoftBank, Sony, Honda, and NEC—have jointly established a new venture focused on developing trillion-parameter AI systems designed specifically for robotics and physical automation, securing $6.7 billion in Japanese government backing. This represents a strategic pivot away from conversational AI toward practical, embodied AI applications.

AI · Bullish · arXiv – CS AI · 4d ago · 7/10

Towards provable probabilistic safety for scalable embodied AI systems

Researchers propose a shift from deterministic to probabilistic safety verification for embodied AI systems, arguing that provable probabilistic guarantees offer a more practical path to large-scale deployment in safety-critical applications like autonomous vehicles and robotics than the infeasible goal of absolute safety across all scenarios.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Learning Dexterous Grasping from Sparse Taxonomy Guidance

Researchers developed GRIT, a two-stage AI framework that learns dexterous robotic grasping from sparse taxonomy guidance, achieving 87.9% success rate. The system first predicts grasp specifications from scene context, then generates finger motions while preserving intended grasp structure, improving generalization to novel objects.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

ROSClaw: A Hierarchical Semantic-Physical Framework for Heterogeneous Multi-Agent Collaboration

Researchers introduce ROSClaw, a new AI framework that integrates large language models with robotic systems to improve multi-agent collaboration and long-horizon task execution. The framework addresses critical gaps between semantic understanding and physical execution by using unified vision-language models and enabling real-time coordination between simulated and real-world robots.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Build on Priors: Vision-Language-Guided Neuro-Symbolic Imitation Learning for Data-Efficient Real-World Robot Manipulation

Researchers have developed a neuro-symbolic framework that enables robots to learn complex manipulation tasks from as few as one demonstration, without requiring manual programming or large datasets. The system uses Vision-Language Models to automatically construct symbolic planning domains and has been validated on real industrial equipment including forklifts and robotic arms.

AI · Bullish · Crypto Briefing · Apr 7 · 7/10

Greg Brockman: AGI will emerge in the next few years, OpenAI is shifting to real-world applications, and robotics will transform with AI integration | Big Technology

OpenAI co-founder Greg Brockman predicts AGI will emerge within the next few years and states that OpenAI is pivoting toward real-world applications. He emphasizes that AI integration will significantly transform robotics and that AGI could revolutionize intellectual tasks under a unified AI framework.

🏢 OpenAI
AI · Neutral · arXiv – CS AI · Mar 26 · 7/10

Evidence of an Emergent "Self" in Continual Robot Learning

Researchers propose a method to identify 'self-awareness' in AI systems by analyzing invariant cognitive structures that remain stable during continual learning. Their study found that robots subjected to continual learning developed significantly more stable subnetworks compared to control groups, suggesting this could be evidence of an emergent 'self' concept.

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

E0: Enhancing Generalization and Fine-Grained Control in VLA Models via Tweedie Discrete Diffusion

Researchers introduce E0, a new AI framework using Tweedie discrete diffusion to improve Vision-Language-Action (VLA) models for robotic manipulation. The system addresses key limitations in existing VLA models by generating more precise actions through iterative denoising over quantized action tokens, achieving 10.7% better performance on average across 14 diverse robotic environments.

AI · Bullish · Blockonomi · Mar 17 · 7/10

YZi Labs Backs RoboForce With $52M to Close the Industrial Labor Gap Through Physical AI

YZi Labs led a $52M funding round for RoboForce, which develops industrial AI robots including the TITAN model with 1mm precision for harsh environments. NVIDIA's CEO Jensen Huang featured RoboForce's TITAN robot at GTC 2025, providing significant validation for the company's Physical AI technology in industrial applications.

🏢 Nvidia
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

From Passive Observer to Active Critic: Reinforcement Learning Elicits Process Reasoning for Robotic Manipulation

Researchers introduce PRIMO R1, a 7B parameter AI framework that transforms video MLLMs from passive observers into active critics for robotic manipulation tasks. The system uses reinforcement learning to achieve 50% better accuracy than specialized baselines and outperforms 72B-scale models, establishing state-of-the-art performance on the RoboFail benchmark.

🏢 OpenAI · 🧠 o1
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Eva-VLA: Evaluating Vision-Language-Action Models' Robustness Under Real-World Physical Variations

Researchers introduced Eva-VLA, the first unified framework to systematically evaluate the robustness of Vision-Language-Action models for robotic manipulation under real-world physical variations. Testing revealed OpenVLA exhibits over 90% failure rates across three physical variations, exposing critical weaknesses in current VLA models when deployed outside laboratory conditions.

AI · Bearish · arXiv – CS AI · Mar 16 · 7/10

Altered Thoughts, Altered Actions: Probing Chain-of-Thought Vulnerabilities in VLA Robotic Manipulation

Research reveals critical vulnerabilities in Vision-Language-Action robotic models that use chain-of-thought reasoning, where corrupting object names in internal reasoning traces can reduce task success rates by up to 45%. The study shows these AI systems are vulnerable to attacks on their internal reasoning processes, even when primary inputs remain untouched.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Active Causal Structure Learning with Latent Variables: Towards Learning to Detour in Autonomous Robots

Researchers propose Active Causal Structure Learning with Latent Variables (ACSLWL) as a necessary component for building AGI agents and robots. The paper demonstrates how this approach enables simulated robots to learn complex detour behaviors when encountering unexpected obstacles, allowing them to adapt to new environments by constructing internal causal models.

AI · Neutral · Blockonomi · Mar 15 · 7/10

Elon Musk: AI Will Make Jobs Optional in the Coming Decades

Elon Musk predicts AI will make traditional jobs optional in coming decades as AI systems become capable of performing most tasks efficiently. He proposes Universal High Income as a solution, where automation reduces costs to basic material and electricity prices, creating abundance while requiring new mechanisms to distribute AI-generated wealth.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

PlayWorld: Learning Robot World Models from Autonomous Play

PlayWorld introduces a breakthrough AI system that trains robot world simulators entirely from autonomous robot self-play, eliminating the need for human demonstrations. The system achieves 40% improvements in failure prediction and 65% policy performance gains when deployed in real-world scenarios.

AI · Bearish · arXiv – CS AI · Mar 11 · 7/10

When Robots Obey the Patch: Universal Transferable Patch Attacks on Vision-Language-Action Models

Researchers have developed UPA-RFAS, a new adversarial attack framework that can successfully fool Vision-Language-Action (VLA) models used in robotics with universal physical patches that transfer across different models and real-world scenarios. The attack exploits vulnerabilities in AI-powered robots by using patches that can hijack attention mechanisms and cause semantic misalignment between visual and text inputs.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

TADPO: Reinforcement Learning Goes Off-road

Researchers introduced TADPO, a novel reinforcement learning approach that extends PPO for autonomous off-road driving. The system achieved successful zero-shot sim-to-real transfer on a full-scale off-road vehicle, marking the first RL-based policy deployment on such a platform.

AI · Bearish · TechCrunch – AI · Mar 7 · 7/10

OpenAI robotics lead Caitlin Kalinowski quits in response to Pentagon deal

Caitlin Kalinowski, OpenAI's robotics team leader, resigned from her position in protest of the company's controversial agreement with the Department of Defense. This represents a significant internal pushback against OpenAI's military partnerships from a key hardware executive.

🏢 OpenAI
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Cognition to Control – Multi-Agent Learning for Human-Humanoid Collaborative Transport

Researchers developed a new three-layer hierarchy called cognition-to-control (C2C) for human-robot collaboration that combines vision-language models with multi-agent reinforcement learning. The system enables sustained deliberation and planning while maintaining real-time control for collaborative manipulation tasks between humans and humanoid robots.
