y0news

#robotics News & Analysis

230 articles tagged with #robotics. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

🧠 AI · Bullish · MIT News – AI · Dec 5 · 7/10

MIT researchers “speak objects into existence” using AI and robotics

MIT researchers have developed a speech-to-reality system that combines 3D generative AI with robotic assembly to create physical objects on demand from voice commands. The technology represents a significant advancement in AI-driven manufacturing and automation capabilities.

🧠 AI · Bullish · Google DeepMind Blog · Oct 23 · 7/10

Gemini Robotics 1.5 brings AI agents into the physical world

Gemini Robotics 1.5 introduces AI agents capable of operating in physical environments, enabling robots to perceive, plan, think, use tools, and act autonomously. This development represents a significant advancement in bringing artificial intelligence beyond digital interfaces into real-world applications for complex multi-step tasks.

🧠 AI · Bullish · NVIDIA AI Blog · Aug 11 · 7/10

NVIDIA Research Shapes Physical AI

NVIDIA Research has achieved breakthroughs in neural rendering, 3D generation, and world simulation technologies that are advancing physical AI applications. These developments are enabling progress in robotics, autonomous vehicles, and content creation by providing more sophisticated AI-driven visual and simulation capabilities.

🧠 AI · Bullish · Hugging Face Blog · Apr 14 · 7/10

Hugging Face to sell open-source robots thanks to Pollen Robotics acquisition 🤖

Hugging Face has acquired Pollen Robotics to expand into the open-source robotics market, enabling the AI platform company to sell physical robots alongside its existing AI model ecosystem. This acquisition represents Hugging Face's strategic move to bridge software and hardware in the AI/robotics space.

🧠 AI · Bullish · Google DeepMind Blog · Mar 12 · 7/10

Gemini Robotics brings AI into the physical world

Gemini Robotics has introduced AI models specifically designed for robots to understand, act, and react in physical environments. The announcement includes both Gemini Robotics and Gemini Robotics-ER variants for robotic applications.

🧠 AI · Bullish · OpenAI News · Oct 15 · 7/10

Solving Rubik’s Cube with a robot hand

OpenAI has trained neural networks to solve a Rubik's Cube using a human-like robot hand, with training conducted entirely in simulation using reinforcement learning and a new technique called Automatic Domain Randomization (ADR). The system demonstrates unprecedented dexterity and can handle unexpected physical situations it never encountered during training, showing reinforcement learning's potential for complex real-world applications.

🧠 AI · Bullish · OpenAI News · Nov 7 · 7/10

Learning concepts with energy functions

Researchers developed an energy-based AI model that can learn spatial concepts like 'near' and 'above' from just five demonstrations using 2D point sets. The model demonstrates cross-domain transfer capabilities, applying concepts learned in 2D particle environments to solve 3D physics-based robotics tasks.

🧠 AI · Bullish · OpenAI News · Jul 30 · 7/10

Learning dexterity

Researchers have successfully trained a robot hand to manipulate physical objects with human-like dexterity, representing a significant breakthrough in robotics and AI. This advancement demonstrates unprecedented precision in robotic manipulation capabilities.

🧠 AI · Bullish · OpenAI News · Oct 19 · 7/10

Generalizing from simulation

New robotics techniques enable robot controllers trained entirely in simulation to successfully operate on physical robots and adapt to unexpected environmental changes. This breakthrough represents a shift from open-loop to closed-loop robotic systems that can react dynamically to real-world conditions.

🧠 AI · Bullish · OpenAI News · May 16 · 7/10

Robots that learn

A new robotics system has been developed that can learn new tasks after observing them just once, with training conducted entirely in simulation before deployment on physical robots. This represents a significant advancement in one-shot learning capabilities for robotics applications.

🧠 AI · Bullish · OpenAI News · Apr 27 · 7/10

OpenAI Gym Beta

OpenAI has released the public beta of OpenAI Gym, a comprehensive toolkit designed for developing and comparing reinforcement learning algorithms. The platform includes a diverse suite of environments ranging from simulated robots to Atari games, along with a website for result comparison and reproducibility.
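
The interface Gym standardized (`reset()` to start an episode, `step(action)` returning observation, reward, done flag, and an info dict) can be sketched with a toy stand-in environment. The `ToyCartPole` class below is a hypothetical placeholder, not the real library, so the snippet stays self-contained:

```python
import random

class ToyCartPole:
    """Stand-in for a Gym environment: same reset/step contract,
    trivial dynamics (episode ends after a random number of steps)."""
    def reset(self):
        self.steps_left = random.randint(5, 20)
        return 0.0  # initial observation

    def step(self, action):
        self.steps_left -= 1
        obs, reward = 0.0, 1.0
        done = self.steps_left <= 0
        return obs, reward, done, {}  # (observation, reward, done, info)

def run_episode(env, policy):
    """The agent-environment loop that Gym's API makes uniform
    across environments, from simulated robots to Atari games."""
    obs, total_reward, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total_reward += reward
    return total_reward

total = run_episode(ToyCartPole(), policy=lambda obs: random.choice([0, 1]))
```

Because every environment exposes the same contract, the same `run_episode` loop (and the same algorithm) runs unchanged against any of them, which is what makes results comparable.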

🧠 AI · Bullish · arXiv – CS AI · 40m ago · 6/10

Unveiling the Surprising Efficacy of Navigation Understanding in End-to-End Autonomous Driving

Researchers propose Sequential Navigation Guidance (SNG), a framework addressing a critical flaw in end-to-end autonomous driving systems that over-rely on local scene understanding while underutilizing global navigation information. The SNG framework combines navigation paths and turn-by-turn instructions with a new VQA dataset and efficient model to improve autonomous vehicle planning and navigation-following in complex scenarios.

🧠 AI · Neutral · AI News · 18h ago · 6/10

Hyundai expands into robotics and physical AI systems

Hyundai Motor Group is pivoting toward physical AI systems, integrating artificial intelligence into robots and machinery designed to operate in real-world environments. The company's current focus centers on factory and industrial applications, signaling a major shift in how the automotive giant approaches automation and manufacturing technology.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Self-Organizing Dual-Buffer Adaptive Clustering Experience Replay (SODACER) for Safe Reinforcement Learning in Optimal Control

Researchers introduce SODACER, a reinforcement learning framework combining dual-buffer experience replay with Control Barrier Functions to enable safe optimal control of nonlinear systems. The approach demonstrates improved convergence and sample efficiency while maintaining safety constraints, with potential applications in robotics, healthcare, and large-scale optimization.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

EmbodiedGovBench: A Benchmark for Governance, Recovery, and Upgrade Safety in Embodied Agent Systems

Researchers introduce EmbodiedGovBench, a new evaluation framework for embodied AI systems that measures governance capabilities like controllability, policy compliance, and auditability rather than just task completion. The benchmark addresses a critical gap in AI safety by establishing standards for whether robot systems remain safe, recoverable, and responsive to human oversight under realistic failures.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Neuro-Symbolic Strong-AI Robots with Closed Knowledge Assumption: Learning and Deductions

This academic paper proposes a neuro-symbolic approach for AGI robots combining neural networks with formal logic reasoning using Belnap's 4-valued logic system. The framework enables robots to handle unknown information, inconsistencies, and paradoxes while maintaining controlled security through axiom-based logic inference.
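
Belnap's four values are commonly encoded as two independent bits, evidence-for and evidence-against; a minimal Python sketch of that standard encoding (not the paper's implementation) shows how the connectives behave on unknown and contradictory inputs:

```python
# Belnap's four truth values as (evidence for, evidence against) pairs.
T, F = (True, False), (False, True)    # classically true / false
B, N = (True, True), (False, False)    # Both (contradiction) / Neither (unknown)

def neg(a):
    """Negation swaps the roles of the two evidence bits."""
    t, f = a
    return (f, t)

def conj(a, b):
    """Conjunction: true needs both, false needs either (meet in truth order)."""
    return (a[0] and b[0], a[1] or b[1])

def disj(a, b):
    """Disjunction: true needs either, false needs both (join in truth order)."""
    return (a[0] or b[0], a[1] and b[1])
```

Note that `neg(B) == B`: a contradiction stays a contradiction under negation, which is how this logic avoids the classical explosion from inconsistent knowledge.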

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

StarVLA-α: Reducing Complexity in Vision-Language-Action Systems

StarVLA-α introduces a simplified baseline architecture for Vision-Language-Action robotic systems that achieves competitive performance across multiple benchmarks without complex engineering. The model demonstrates that a strong vision-language backbone combined with minimal design choices can match or exceed existing specialized approaches, suggesting the VLA field has been over-engineered.

🧠 AI · Neutral · arXiv – CS AI · 2d ago · 6/10

Mind the Gap Between Spatial Reasoning and Acting! Step-by-Step Evaluation of Agents With Spatial-Gym

Researchers introduce Spatial-Gym, a benchmarking environment that evaluates AI models on spatial reasoning tasks through step-by-step pathfinding in 2D grids rather than one-shot generation. Testing eight models reveals a significant performance gap, with the best model achieving only 16% solve rate versus 98% for humans, exposing critical limitations in how AI systems scale reasoning effort and process spatial information.
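
Step-by-step grid pathfinding of the kind the benchmark scores can be illustrated with a plain breadth-first search; the grid encoding and function name below are assumptions for illustration, not Spatial-Gym's actual API:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path in a 2-D grid ('.' free, '#' wall), returned as the
    step-by-step sequence of cells a step-wise evaluator could score,
    rather than a one-shot answer."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = ["..#",
        ".##",
        "..."]
path = bfs_path(grid, (0, 0), (2, 2))
```

A classical search solves such grids exactly; the reported 16% vs 98% gap is about whether general-purpose models can sustain the same incremental reasoning without an explicit algorithm.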

🧠 AI · Neutral · arXiv – CS AI · 2d ago · 6/10

WOMBET: World Model-based Experience Transfer for Robust and Sample-efficient Reinforcement Learning

Researchers introduce WOMBET, a framework that improves reinforcement learning efficiency in robotics by generating synthetic training data from a world model in source tasks and selectively transferring it to target tasks. The approach combines offline-to-online learning with uncertainty-aware planning to reduce data collection costs while maintaining robustness.

🧠 AI · Neutral · arXiv – CS AI · 2d ago · 6/10

Dejavu: Towards Experience Feedback Learning for Embodied Intelligence

Researchers introduce Dejavu, a post-deployment learning framework that enables frozen Vision-Language-Action policies to improve through experience retrieval and feedback networks. The system allows embodied AI agents to continuously learn from past trajectories without retraining, improving task performance across diverse robotic applications.
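
Experience retrieval in the broad sense can be sketched as nearest-neighbor lookup over stored trajectories; everything below (field names, distance metric, data) is a hypothetical illustration, not Dejavu's design:

```python
import math

# Toy bank of past experiences: observation embedding -> action and outcome.
experience_bank = [
    {"obs": (0.9, 0.1), "action": "grasp",  "outcome": "success"},
    {"obs": (0.1, 0.8), "action": "push",   "outcome": "failure"},
    {"obs": (0.5, 0.5), "action": "rotate", "outcome": "success"},
]

def retrieve(obs, bank):
    """Return the stored experience whose observation is closest in L2
    distance; a frozen policy can condition on it without any retraining."""
    return min(bank, key=lambda e: math.dist(obs, e["obs"]))

best = retrieve((0.8, 0.2), experience_bank)
```

The point of the retrieval route is that the bank grows with deployment while the policy weights stay frozen, so behavior improves without a training loop.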

📰 General · Bullish · TechCrunch – AI · 4d ago · 6/10

TechCrunch is heading to Tokyo — and bringing the Startup Battlefield with it

TechCrunch is expanding its flagship Startup Battlefield event to Tokyo in 2026, focusing on four transformative technology domains: AI, Robotics, Resilience, and Entertainment. The event will feature live robot demonstrations, autonomous driving discussions, cybersecurity sessions, and industry conversations about AI's impact on music and anime.

🧠 AI · Bullish · arXiv – CS AI · 5d ago · 6/10

KITE: Keyframe-Indexed Tokenized Evidence for VLM-Based Robot Failure Analysis

KITE is a training-free system that converts long robot execution videos into compact, interpretable tokens for vision-language models to analyze robot failures. The approach combines keyframe extraction, open-vocabulary detection, and bird's-eye-view spatial representations to enable failure detection, identification, localization, and correction without requiring model fine-tuning.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

VLA-Forget: Vision-Language-Action Unlearning for Embodied Foundation Models

Researchers introduce VLA-Forget, a new unlearning framework for vision-language-action (VLA) models used in robotic manipulation. The hybrid approach addresses the challenge of removing unsafe or unwanted behaviors from embodied AI foundation models while preserving their core perception, language, and action capabilities.

🧠 AI · Bullish · Microsoft Research Blog · Mar 26 · 6/10

AsgardBench: A benchmark for visually grounded interactive planning

Microsoft Research introduces AsgardBench, a new benchmark for evaluating embodied AI systems that can perform visually grounded interactive planning. The benchmark focuses on testing robots' ability to observe environments, make decisions, and adapt when conditions change unexpectedly, using kitchen cleaning scenarios as examples.

Page 4 of 10