arXiv – CS AI · 6h ago
Flexible Agent Alignment with Goal Inference from Open-Ended Dialog
Researchers introduce Open-Universe Assistance Games (OU-AGs), a framework enabling LLM-based agents to infer and align with human preferences through open-ended dialogue. The GOOD method extracts evolving goals from natural language interactions using probabilistic inference, demonstrating improved user intent alignment across shopping, robotics, and coding domains without requiring large offline datasets.
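The summary describes inferring a distribution over a user's goals that is updated as dialogue unfolds. The paper's actual GOOD method is not specified here, so the sketch below is purely illustrative: it performs a Bayesian posterior update over a fixed set of hypothetical candidate goals, using a toy keyword-overlap likelihood in place of an LLM-scored likelihood. All names (`goal_likelihood`, `update_posterior`, the goal specs) are assumptions, not the authors' API.

```python
def goal_likelihood(utterance, goal_keywords):
    # Toy likelihood: keyword overlap with add-one smoothing.
    # A real system would instead score P(utterance | goal) with an LLM.
    words = set(utterance.lower().split())
    hits = sum(1 for k in goal_keywords if k in words)
    return (hits + 1) / (len(goal_keywords) + 1)

def update_posterior(prior, utterance, goal_specs):
    # Bayesian update: posterior ∝ prior × likelihood of the new utterance.
    unnorm = {g: prior[g] * goal_likelihood(utterance, kws)
              for g, kws in goal_specs.items()}
    z = sum(unnorm.values())
    return {g: w / z for g, w in unnorm.items()}

# Hypothetical candidate goals for a shopping assistant.
goal_specs = {
    "buy_laptop": ["laptop", "battery", "screen"],
    "buy_camera": ["camera", "lens", "zoom"],
}
posterior = {g: 1 / len(goal_specs) for g in goal_specs}  # uniform prior
for utt in ["I need something with a long battery life",
            "ideally a big screen for coding"]:
    posterior = update_posterior(posterior, utt, goal_specs)
```

After both utterances the posterior shifts sharply toward `buy_laptop`, mirroring the idea that the agent refines its goal estimate from open-ended conversation rather than from a large offline dataset.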