arXiv – CS AI · 10h ago
Large Language Models for Sequential Decision-Making: Improving In-Context Learning via Supervised Fine-Tuning
Researchers demonstrate that large language models can be effectively fine-tuned for sequential decision-making across MDPs, POMDPs, and ambiguous environments by learning from offline trajectory data. The fine-tuned models outperform baseline methods, particularly in complex, partially observed settings, and a theoretical analysis shows that the fine-tuned attention mechanisms implicitly estimate optimal Q-functions.
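In its simplest form, supervised fine-tuning on offline trajectories reduces to behavior cloning: minimize cross-entropy between the model's action distribution and the actions in the logged data. The sketch below illustrates that reduction on a hypothetical toy MDP with a linear softmax policy standing in for the LLM; the environment, noisy expert, and all names are illustrative assumptions, not the paper's actual setup or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state toy MDP: the behavior "expert" prefers action = state % 2,
# with 10% random exploration noise in the logged data.
N_STATES, N_ACTIONS, HORIZON = 4, 2, 8

def collect_offline_trajectories(n_traj=200):
    """Roll out the noisy expert to produce offline (state, action) pairs."""
    data = []
    for _ in range(n_traj):
        s = rng.integers(N_STATES)
        for _ in range(HORIZON):
            a = int(s % 2) if rng.random() < 0.9 else int(rng.integers(N_ACTIONS))
            data.append((s, a))
            s = (s + a + 1) % N_STATES  # deterministic toy transition
    return data

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Linear softmax policy trained by cross-entropy on the offline actions,
# i.e. supervised fine-tuning stripped down to behavior cloning.
W = np.zeros((N_STATES, N_ACTIONS))
data = collect_offline_trajectories()
for epoch in range(50):
    for s, a in data:
        logits = W[s]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad = p - one_hot(a, N_ACTIONS)  # d(cross-entropy) / d(logits)
        W[s] -= 0.1 * grad

greedy = W.argmax(axis=1)
print(greedy.tolist())  # the greedy policy recovers the expert rule: action = state % 2
```

The paper's claim goes further than this sketch: with attention over the in-context trajectory, the fine-tuned model's internal computation is argued to implicitly estimate optimal Q-values rather than merely imitating the logged actions.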