From History to State: Constant-Context Skill Learning for LLM Agents
Researchers propose constant-context skill learning, a framework that lets LLM agents learn reusable task procedures as lightweight modules instead of accumulating long prompt histories in memory. The approach reduces per-inference token usage by 2–7× while maintaining or improving performance across multiple benchmark environments, addressing the privacy–capability tradeoff in agent deployment.
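The core idea, replacing long interaction histories with compact, retrievable skill modules, can be sketched as follows. This is an illustrative mock-up, not the authors' implementation: the `SkillLibrary` class, the line-prefix distillation heuristic, and the whitespace token counter are all hypothetical stand-ins (a real system would distill skills with an LLM and count tokens with a proper tokenizer).

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A learned, reusable procedure stored as a compact module."""
    name: str
    steps: list[str]  # distilled procedure, not the raw interaction history

@dataclass
class SkillLibrary:
    """Hypothetical store: constant-size skill prompts replace long histories."""
    skills: dict[str, Skill] = field(default_factory=dict)

    def learn(self, name: str, transcript: list[str]) -> None:
        # Toy distillation: keep only lines marked as actions.
        # A real system would summarize the transcript with an LLM.
        steps = [line for line in transcript if line.startswith("ACTION:")]
        self.skills[name] = Skill(name, steps)

    def prompt_for(self, name: str) -> str:
        # Constant-context prompt: only the distilled steps are sent,
        # regardless of how long the original transcript was.
        return "\n".join(self.skills[name].steps)

def token_count(text: str) -> int:
    # Crude whitespace tokenizer, for illustration only.
    return len(text.split())

if __name__ == "__main__":
    lib = SkillLibrary()
    # A long transcript: mostly reasoning chatter, three actual actions.
    transcript = ["THOUGHT: need to open the file"] * 50 + [
        "ACTION: open config.yaml",
        "ACTION: edit timeout=30",
        "ACTION: save",
    ]
    lib.learn("edit-config", transcript)

    history_tokens = token_count("\n".join(transcript))
    skill_tokens = token_count(lib.prompt_for("edit-config"))
    # The skill prompt stays small as the transcript grows,
    # which is the source of the per-inference token savings.
    print(history_tokens, skill_tokens)
```

The design point being illustrated: the prompt built from the skill module stays roughly constant in size as the agent's history grows, which is where the claimed per-inference token reduction comes from.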