AIBullish — arXiv CS AI · 4h ago · 6/10
Decocted Experience Improves Test-Time Inference in LLM Agents
Researchers present a new approach that improves Large Language Model performance at test time without updating model parameters. The method uses "decocted experience": key insights extracted from previous interactions and organized to guide subsequent reasoning. By constructing better contextual inputs rather than simply scaling test-time compute, it proves effective across reasoning tasks including math, web browsing, and software engineering.
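The core loop described above, distilling lessons from past episodes and injecting them into the test-time context, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the function names, episode format, and selection heuristic (keep lessons from successful episodes only) are all assumptions.

```python
# Hypothetical sketch of test-time experience reuse: no weights are
# updated; past insights are simply prepended to the prompt.

def distill_insights(episodes):
    """Extract short reusable lessons from past interaction logs.

    Illustrative heuristic: keep only lessons from successful episodes.
    """
    return [
        f"When {ep['task']}: {ep['lesson']}"
        for ep in episodes
        if ep["success"]
    ]

def build_prompt(task, insights, max_insights=3):
    """Construct a richer contextual input from distilled experience."""
    experience = "\n".join(insights[:max_insights])
    return f"Relevant experience:\n{experience}\n\nTask: {task}"

# Toy episode log (fabricated for illustration).
episodes = [
    {"task": "solving quadratic equations", "success": True,
     "lesson": "check the discriminant before factoring"},
    {"task": "filling a web form", "success": False,
     "lesson": "n/a"},
]

prompt = build_prompt("solve x^2 - 5x + 6 = 0",
                      distill_insights(episodes))
print(prompt)
```

The key property is that only the context changes between episodes; the model itself is frozen, so gains come entirely from better-organized inputs.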