AIBullish · arXiv — CS AI · 7h ago · 6/10
Asynchronous Verified Semantic Caching for Tiered LLM Architectures
Researchers introduce Krites, an asynchronous caching system for Large Language Models that uses LLM judges to verify cached responses off the critical path, improving efficiency without changing serving decisions. The system increases the fraction of requests served with curated static answers by up to 3.9× while leaving critical-path latency unchanged.
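The core idea can be illustrated with a minimal sketch (not the paper's actual implementation): cached answers are served immediately on the critical path, while a judge check is deferred to a background queue that can evict entries it deems stale. The `AsyncVerifiedCache` class, its exact-match keying (standing in for real semantic similarity), and the `judge(query, answer)` callback are all hypothetical simplifications.

```python
from collections import deque
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class CacheEntry:
    answer: str


class AsyncVerifiedCache:
    """Toy sketch of asynchronous verified caching: serve from cache
    instantly, verify with a judge later, evict entries that fail."""

    def __init__(self, judge: Callable[[str, str], bool]):
        # Hypothetical judge callback: returns True if the cached
        # answer is still acceptable for the query.
        self.judge = judge
        self.store: dict[str, CacheEntry] = {}
        self.pending: deque[tuple[str, str]] = deque()

    def put(self, query: str, answer: str) -> None:
        self.store[query] = CacheEntry(answer)

    def get(self, query: str) -> Optional[str]:
        # Critical path: no judge call here, so latency is unchanged.
        entry = self.store.get(query)
        if entry is None:
            return None
        # Enqueue an off-path verification of the answer just served.
        self.pending.append((query, entry.answer))
        return entry.answer

    def run_verifications(self) -> None:
        # Off the critical path: judge each served answer and evict
        # cache entries the judge rejects.
        while self.pending:
            query, answer = self.pending.popleft()
            if not self.judge(query, answer) and query in self.store:
                del self.store[query]
```

In a real system the verification queue would be drained by background workers and the judge would be an LLM call; here `run_verifications` is invoked explicitly to keep the sketch deterministic.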