
Learning Physical Principles from Interaction: Self-Evolving Planning via Test-Time Memory

arXiv – CS AI | Haoyang Li, Yang You, Hao Su, Leonidas Guibas

AI Summary

Researchers introduce PhysMem, a memory framework that enables vision-language model (VLM) robot planners to learn physical principles through test-time interaction without updating model parameters. The system records experiences, generates hypotheses about physical properties, and verifies them against new observations before applying them, achieving a 76% success rate on a brick insertion task compared to 23% for direct experience retrieval.

Key Takeaways
  • PhysMem allows VLM robot planners to learn physical properties through test-time interaction without parameter updates.
  • The system uses a verification-before-application approach to test hypotheses against new observations.
  • Success rates rose markedly: 76% with PhysMem versus 23% with direct experience retrieval on controlled tasks.
  • The framework was evaluated across four VLM backbones on real-world manipulation tasks and simulations.
  • Real-world experiments showed consistent improvement over 30-minute deployment sessions.
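The paper's actual memory and verification machinery is not detailed in this summary. As an illustration only, the record → hypothesize → verify-before-apply loop described above might be sketched as follows (all class names, signatures, and the matching logic here are hypothetical, not the authors' implementation):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    """A candidate physical principle induced from past interactions."""
    rule: str                      # human-readable statement of the principle
    predict: Callable[[str], str]  # maps an observation to a predicted outcome
    verified: bool = False

class PhysMemoryStub:
    """Illustrative test-time memory: record -> hypothesize -> verify -> apply."""
    def __init__(self):
        self.experiences = []      # (observation, action, outcome) triples
        self.hypotheses = []

    def record(self, observation, action, outcome):
        self.experiences.append((observation, action, outcome))

    def propose(self, rule, predict):
        self.hypotheses.append(Hypothesis(rule, predict))

    def verify(self, observation, outcome):
        # Verification-before-application: a hypothesis becomes usable
        # only after its prediction matches a fresh observation.
        for h in self.hypotheses:
            if not h.verified and h.predict(observation) == outcome:
                h.verified = True

    def applicable_rules(self):
        # Only verified principles are handed to the planner.
        return [h.rule for h in self.hypotheses if h.verified]
```

For example, a hypothesis proposed after a failed insertion stays out of the planner's rule set until a later observation confirms its prediction; no model parameters change at any point, which matches the summary's claim of learning without parameter updates.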