AIBullish · arXiv CS AI · 7h ago · 7/10
Predictive Multi-Tier Memory Management for KV Cache in Large-Scale GPU Inference
Researchers present a unified system for optimizing KV cache memory management in large-scale GPU inference, addressing three critical inefficiencies through architecture-aware sizing, a multi-tier memory hierarchy spanning CPU memory to NVMe storage, and predictive eviction policies. The approach achieves 70-84% cache hit rates, with projected 1.4-2.1x latency improvements, 1.7-2.9x throughput gains, and a 47% cost reduction compared to existing solutions.
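The core ideas of a multi-tier hierarchy with predictive eviction can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the two-tier structure, the `predicted_reuse` score, and all class and method names here are assumptions for clarity (a real system would hold tensors in GPU/CPU/NVMe tiers and learn the reuse predictor).

```python
class TieredKVCache:
    """Toy two-tier KV cache with predictive eviction.

    Illustrative only: a fast tier (stand-in for GPU HBM) demotes the
    entry least likely to be reused into a slow tier (stand-in for
    CPU/NVMe), rather than evicting by recency alone.
    """

    def __init__(self, fast_capacity, slow_capacity):
        self.fast = {}  # key -> (value, predicted_reuse score)
        self.slow = {}
        self.fast_capacity = fast_capacity
        self.slow_capacity = slow_capacity
        self.hits = 0
        self.lookups = 0

    def put(self, key, value, predicted_reuse):
        """Insert into the fast tier, demoting the entry with the lowest
        predicted reuse when the tier is full (predictive eviction)."""
        if len(self.fast) >= self.fast_capacity:
            victim = min(self.fast, key=lambda k: self.fast[k][1])
            self._demote(victim)
        self.fast[key] = (value, predicted_reuse)

    def _demote(self, key):
        value, score = self.fast.pop(key)
        if len(self.slow) >= self.slow_capacity:
            # Drop the least promising slow-tier entry (a real system
            # would spill further down, e.g. to NVMe).
            del self.slow[min(self.slow, key=lambda k: self.slow[k][1])]
        self.slow[key] = (value, score)

    def get(self, key):
        self.lookups += 1
        if key in self.fast:
            self.hits += 1
            return self.fast[key][0]
        if key in self.slow:
            # Slow-tier hit: promote back into the fast tier.
            value, score = self.slow.pop(key)
            self.put(key, value, score)
            self.hits += 1
            return value
        return None  # miss: KV blocks must be recomputed

    def hit_rate(self):
        return self.hits / self.lookups if self.lookups else 0.0
```

The design choice to rank victims by predicted future reuse, rather than pure LRU, is what lets a predictive policy keep hot KV blocks resident in the fast tier while cold blocks migrate down the hierarchy.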