arXiv – CS AI · 6h ago
🧠
Memory Inception: Latent-Space KV Cache Manipulation for Steering LLMs
Researchers introduce Memory Inception (MI), a training-free method for steering large language models by inserting text-derived key-value banks at selected attention layers rather than caching full prompts. MI achieves control competitive with instruction prompting while using up to 118x less storage, and it outperforms existing activation-steering methods on personality, reasoning, and guidance tasks.
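The core idea above can be sketched with a toy single-head attention cache: instead of prepending a long steering prompt (and caching a KV pair per prompt token), a small precomputed key-value bank is inserted directly into a layer's cache. This is an illustrative numpy sketch, not the paper's implementation; the bank contents, sizes, and the name `attend` are assumptions for demonstration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, K, V):
    # Single-head scaled dot-product attention over cached keys/values.
    d = K.shape[-1]
    w = softmax(q @ K.T / np.sqrt(d))
    return w @ V

rng = np.random.default_rng(0)
d = 8

# Ordinary KV cache built from the conversation so far (here, 5 tokens).
K_ctx, V_ctx = rng.normal(size=(5, d)), rng.normal(size=(5, d))

# Hypothetical steering bank: a handful of text-derived KV pairs, far
# smaller than caching every token of a long instruction prompt.
K_mem, V_mem = rng.normal(size=(2, d)), rng.normal(size=(2, d))

q = rng.normal(size=(d,))
baseline = attend(q, K_ctx, V_ctx)

# MI-style insertion: prepend the bank to this layer's cache so the
# query can attend to the steering memories alongside the real context.
steered = attend(q, np.vstack([K_mem, K_ctx]), np.vstack([V_mem, V_ctx]))
```

Because the bank replaces a multi-hundred-token prompt with a few KV rows per selected layer, the storage saving scales with prompt length, which is where a figure like 118x could come from.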