AIBullish · arXiv CS AI · 14h ago · 7/10
IceCache: Memory-efficient KV-cache Management for Long-Sequence LLMs
IceCache is a new memory-management technique for large language models that reduces KV-cache memory consumption by 75% while retaining 99% accuracy on long-sequence tasks. The method combines semantic token clustering with PagedAttention to offload cache data between GPU and CPU, addressing a key bottleneck for LLM inference on resource-constrained hardware.
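To make the idea concrete, here is a minimal, hypothetical sketch of the clustering-plus-offload pattern the summary describes: group KV-cache entries by key similarity, keep a budgeted fraction of clusters "resident" (GPU-side), and offload the rest (CPU-side). All function names, the k-means clustering, and the cluster-size "hotness" proxy are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cluster_keys(keys: np.ndarray, n_clusters: int, seed: int = 0) -> np.ndarray:
    """Naive k-means (Lloyd's algorithm) over key vectors; returns a cluster label per token."""
    rng = np.random.default_rng(seed)
    centroids = keys[rng.choice(len(keys), size=n_clusters, replace=False)].copy()
    for _ in range(10):  # fixed iteration count keeps the sketch simple
        dists = np.linalg.norm(keys[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(n_clusters):
            members = keys[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels

def partition_cache(keys: np.ndarray, resident_frac: float = 0.25,
                    n_clusters: int = 8, seed: int = 0):
    """Split token indices into a GPU-resident set and a CPU-offloaded set.

    Cluster size stands in for attention 'hotness' here (an assumption);
    resident_frac=0.25 mirrors the 75% memory reduction claimed above.
    """
    labels = cluster_keys(keys, n_clusters, seed)
    sizes = np.bincount(labels, minlength=n_clusters)
    order = np.argsort(sizes)[::-1]          # largest clusters first
    budget = int(resident_frac * len(keys))  # token budget allowed on GPU
    gpu_idx, cpu_idx, taken = [], [], 0
    for c in order:
        members = np.flatnonzero(labels == c)
        if taken + len(members) <= budget:
            gpu_idx.extend(members)
            taken += len(members)
        else:
            cpu_idx.extend(members)  # whole cluster offloaded to host memory
    return np.array(gpu_idx, dtype=int), np.array(cpu_idx, dtype=int)

rng = np.random.default_rng(1)
keys = rng.normal(size=(64, 16))  # 64 cached tokens, 16-dim keys
gpu_idx, cpu_idx = partition_cache(keys)
```

In a real system the offloaded clusters would live in paged CPU buffers (the PagedAttention part) and be paged back in when attention scores indicate they are needed; this sketch only shows the partitioning decision.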