AIBullish — arXiv CS AI · 14h ago · 7/10
🧠
SPEED-Bench: A Unified and Diverse Benchmark for Speculative Decoding
Researchers introduce SPEED-Bench, a comprehensive benchmark suite for evaluating Speculative Decoding (SD) techniques that accelerate LLM inference. The benchmark addresses critical gaps in existing evaluation methods by offering diverse semantic domains, throughput-oriented testing across multiple concurrency levels, and integration with production inference systems such as vLLM and TensorRT-LLM, enabling more accurate measurement of real-world performance.
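For context on what such a benchmark measures: speculative decoding uses a cheap draft model to propose several tokens, which the large target model then verifies in one pass, keeping the accepted prefix. The sketch below is a minimal toy illustration of that accept/verify loop (not SPEED-Bench's code); `draft_model` and `target_model` are hypothetical stand-ins for real LLMs, and greedy (deterministic) decoding is assumed so the output provably matches decoding with the target model alone.

```python
import random

VOCAB_SIZE = 8

def draft_model(context):
    # Hypothetical stand-in for a small, fast draft model (greedy next token).
    return (sum(context) + 1) % VOCAB_SIZE

def target_model(context):
    # Hypothetical stand-in for the large target model being accelerated;
    # it mostly agrees with the draft but occasionally diverges.
    s = sum(context)
    return (s + 1) % VOCAB_SIZE if s % 3 else (s + 2) % VOCAB_SIZE

def speculative_step(context, k=4):
    """Draft k tokens cheaply, then verify them against the target model.

    The longest prefix the target agrees with is accepted; at the first
    mismatch the target's own token is substituted, so the output is
    identical to greedy decoding with the target model alone.
    """
    # Phase 1: autoregressively draft k candidate tokens with the cheap model.
    drafts, ctx = [], list(context)
    for _ in range(k):
        t = draft_model(ctx)
        drafts.append(t)
        ctx.append(t)
    # Phase 2: verify drafts against the target (in a real system this is
    # a single batched forward pass, which is where the speedup comes from).
    accepted, ctx = [], list(context)
    for t in drafts:
        expected = target_model(ctx)
        if t == expected:
            accepted.append(t)      # draft token accepted
            ctx.append(t)
        else:
            accepted.append(expected)  # correction: emit the target's token
            break
    return accepted

out = speculative_step([1, 2])
```

Benchmarks like the one described vary the workload (semantic domain, concurrency level) precisely because the acceptance rate in this loop, and hence the speedup, depends heavily on how often draft and target agree.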