
#llm-inference News & Analysis

18 articles tagged with #llm-inference. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

SpecBranch: Speculative Decoding via Hybrid Drafting and Rollback-Aware Branch Parallelism

SpecBranch introduces a novel speculative decoding framework that leverages branch parallelism to accelerate large language model inference, achieving 1.8x to 4.5x speedups over standard auto-regressive decoding. The technique addresses serialization bottlenecks in existing speculative decoding methods by implementing parallel drafting branches with adaptive token lengths and rollback-aware orchestration.
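To make the draft-then-verify idea concrete, here is a minimal sketch of generic speculative decoding with rollback, the mechanism SpecBranch builds its parallel branches on. This is an illustrative toy (the `target_next`/`draft_next` callables and counting "models" are invented for the example), not the paper's system:

```python
def speculative_decode(target_next, draft_next, prompt, max_new=8, k=3):
    """Toy speculative decoding: a cheap draft model proposes k tokens,
    the target model verifies them and rolls back at the first mismatch."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # Draft phase: propose k tokens autoregressively with the cheap model.
        draft, ctx = [], list(seq)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # Verify phase: accept the longest prefix the target agrees with.
        accepted = 0
        for i, t in enumerate(draft):
            if target_next(seq + draft[:i]) == t:
                accepted += 1
            else:
                break
        seq.extend(draft[:accepted])
        # Rollback: on a mismatch, emit one corrected token from the target.
        if accepted < k:
            seq.append(target_next(seq))
    return seq[len(prompt):][:max_new]

# Toy "models": target counts up mod 10; draft disagrees after multiples of 4.
target = lambda ctx: (ctx[-1] + 1) % 10
draft = lambda ctx: (ctx[-1] + 1) % 10 if ctx[-1] % 4 != 0 else (ctx[-1] + 2) % 10
```

The serialization bottleneck SpecBranch targets is visible here: each draft burst must finish before verification starts, which branch parallelism with adaptive draft lengths overlaps.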

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

SPEED-Bench: A Unified and Diverse Benchmark for Speculative Decoding

Researchers introduce SPEED-Bench, a comprehensive benchmark suite for evaluating Speculative Decoding (SD) techniques that accelerate LLM inference. The benchmark addresses critical gaps in existing evaluation methods by offering diverse semantic domains, throughput-oriented testing across multiple concurrency levels, and integration with production systems like vLLM and TensorRT-LLM, enabling more accurate real-world performance measurement.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

IceCache: Memory-efficient KV-cache Management for Long-Sequence LLMs

IceCache is a new memory management technique for large language models that reduces KV cache memory consumption by 75% while maintaining 99% accuracy on long-sequence tasks. The method combines semantic token clustering with PagedAttention to intelligently offload cache data between GPU and CPU, addressing a critical bottleneck in LLM inference on resource-constrained hardware.
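The GPU/CPU offloading side of this can be sketched with a toy paged KV cache: a bounded "GPU" tier holds hot pages and evicts least-recently-used pages to a "CPU" tier. This is a generic LRU paging illustration under invented names; IceCache's semantic token clustering policy is not shown:

```python
from collections import OrderedDict

class PagedKVCache:
    """Toy two-tier paged KV cache. A size-bounded 'GPU' dict holds hot
    pages; on overflow the least-recently-used page is offloaded to a
    'CPU' dict, and faulted back in on access."""
    def __init__(self, gpu_pages=2):
        self.gpu_pages = gpu_pages
        self.gpu = OrderedDict()   # page_id -> KV data (here just a list)
        self.cpu = {}

    def put(self, page_id, kv):
        self.gpu[page_id] = kv
        self.gpu.move_to_end(page_id)
        while len(self.gpu) > self.gpu_pages:
            victim, data = self.gpu.popitem(last=False)  # evict LRU page
            self.cpu[victim] = data                      # offload to host RAM

    def get(self, page_id):
        if page_id in self.cpu:                          # page fault: fetch back
            self.put(page_id, self.cpu.pop(page_id))
        self.gpu.move_to_end(page_id)
        return self.gpu[page_id]
```

A real system would replace the LRU rule with a smarter policy (IceCache clusters semantically similar tokens) and move actual tensors, but the tiering mechanics are the same.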

AI · Bullish · arXiv – CS AI · 4d ago · 7/10

CSAttention: Centroid-Scoring Attention for Accelerating LLM Inference

Researchers introduce CSAttention, a training-free sparse attention method that accelerates LLM inference by 4.6x for long-context applications. The technique optimizes the offline-prefill/online-decode workflow by precomputing query-centric lookup tables, enabling faster token generation without sacrificing accuracy even at 95% sparsity levels.
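The core trick of centroid scoring can be illustrated in a few lines: cluster the keys offline, score the query against the cheap centroid table, and run exact attention only over keys in the top-scoring clusters. A minimal pure-Python sketch under assumed inputs (the function and argument names are invented, and the paper's actual kernel and lookup-table construction are not reproduced):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def centroid_sparse_attend(query, keys, values, centroids, assign, top_c=1):
    """Toy centroid-scoring sparse attention: rank key clusters by
    query-centroid similarity (cheap), then run exact softmax attention
    over keys in the top_c clusters only."""
    # Cheap stage: rank clusters via the precomputed centroid table.
    ranked = sorted(range(len(centroids)), key=lambda c: -dot(query, centroids[c]))
    keep = set(ranked[:top_c])
    idx = [i for i, c in enumerate(assign) if c in keep]
    # Exact stage: numerically stable softmax attention over selected keys.
    scores = [dot(query, keys[i]) for i in idx]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    dim = len(values[0])
    return [sum(w[j] * values[idx[j]][d] for j in range(len(idx))) / z
            for d in range(dim)]
```

Because the centroid pass touches one vector per cluster instead of one per token, the decode-time cost shrinks with the sparsity level, which is where the reported 4.6x comes from.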

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Inference-Time Code Selection via Symbolic Equivalence Partitioning

Researchers propose Symbolic Equivalence Partitioning, a novel inference-time selection method for code generation that uses symbolic execution and SMT constraints to identify correct solutions without expensive external verifiers. The approach improves accuracy on HumanEval+ by 10.3% and on LiveCodeBench by 17.1% at N=10 without requiring additional LLM inference.
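The selection logic can be sketched with a simplified stand-in: partition the N sampled programs into behavioral equivalence classes and return a member of the largest class. The paper derives equivalence symbolically via SMT constraints; this toy uses concrete probe inputs instead, and all names here are invented for illustration:

```python
def select_by_equivalence(candidates, probe_inputs):
    """Simplified equivalence partitioning: group candidate programs by
    identical behavior on probe inputs, then pick one member of the
    largest class. (The paper proves equivalence with symbolic execution
    and SMT solving rather than concrete probing.)"""
    classes = {}
    for fn in candidates:
        sig = []
        for x in probe_inputs:
            try:
                sig.append(('ok', fn(x)))
            except Exception as e:          # crashes also define behavior
                sig.append(('err', type(e).__name__))
        classes.setdefault(tuple(sig), []).append(fn)
    largest = max(classes.values(), key=len)
    return largest[0], len(classes)

# Three sampled "solutions" to abs(); the first two are behaviorally equal.
cands = [lambda x: x if x >= 0 else -x,
         lambda x: (x * x) ** 0.5,
         lambda x: x]  # buggy for negative inputs
```

The appeal of the symbolic version is that it needs no trusted test suite and no extra LLM calls: equivalence classes act as votes, so the most commonly implemented behavior wins.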

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Space Filling Curves is All You Need: Communication-Avoiding Matrix Multiplication Made Simple

Researchers present a new approach to General Matrix Multiplication (GEMM) using Space Filling Curves that automatically optimizes data movement across memory hierarchies without requiring platform-specific tuning. The method achieves up to 5.5x speedups over vendor libraries and demonstrates significant performance gains in LLM inference and distributed computing applications.
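The idea of driving a blocked GEMM with a space-filling curve can be shown in miniature: visit the output tiles in Morton (Z-order) sequence so consecutive tiles reuse nearby rows of A and columns of B. A toy sketch for power-of-two sizes (the paper generalizes this across memory hierarchies and distributed settings; function names are invented):

```python
def morton_order(n):
    """Z-order (Morton) traversal of an n x n tile grid, n a power of two:
    interleave the bits of the row and column indices and sort by that key."""
    def key(rc):
        r, c = rc
        z = 0
        for b in range(n.bit_length()):
            z |= ((r >> b) & 1) << (2 * b + 1)
            z |= ((c >> b) & 1) << (2 * b)
        return z
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)

def blocked_matmul_sfc(A, B, tile=2):
    """Toy blocked GEMM that visits C-tiles along the Morton curve, so
    successive tiles share operand blocks and data movement stays local."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for (ti, tj) in morton_order(n // tile):
        for tk in range(n // tile):          # reduction over K tiles
            for i in range(ti * tile, (ti + 1) * tile):
                for j in range(tj * tile, (tj + 1) * tile):
                    C[i][j] += sum(A[i][k] * B[k][j]
                                   for k in range(tk * tile, (tk + 1) * tile))
    return C
```

Because the curve's locality holds at every scale, the same traversal order serves registers, caches, and network tiers without per-platform tuning, which is the paper's central claim.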

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Patterns behind Chaos: Forecasting Data Movement for Efficient Large-Scale MoE LLM Inference

Researchers analyzed data movement patterns in large-scale Mixture of Experts (MoE) language models (200B-1000B parameters) to optimize inference performance. Their findings led to architectural modifications achieving 6.6x speedups on wafer-scale GPUs and up to 1.25x improvements on existing systems through better expert placement algorithms.
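One way to see what "better expert placement" means: if profiling shows which expert pairs are frequently co-activated, a placement can greedily co-locate them to cut cross-device traffic. The following greedy sketch is an invented illustration of that optimization target, not the paper's algorithm:

```python
def place_experts(num_experts, num_devices, coactivation):
    """Toy expert placement: assign experts to devices so that pairs with
    high co-activation counts land on the same device, reducing
    cross-device data movement during MoE inference."""
    cap = num_experts // num_devices          # per-device expert capacity
    placement, load = {}, [0] * num_devices
    # Handle the heaviest co-activated pairs first.
    for (a, b), _ in sorted(coactivation.items(), key=lambda kv: -kv[1]):
        for e in (a, b):
            if e not in placement:
                partner = b if e == a else a
                d = placement.get(partner)    # prefer the partner's device
                if d is None or load[d] >= cap:
                    d = min(range(num_devices), key=lambda i: load[i])
                placement[e] = d
                load[d] += 1
    # Any experts never seen in a hot pair fill the remaining slots.
    for e in range(num_experts):
        if e not in placement:
            d = min(range(num_devices), key=lambda i: load[i])
            placement[e] = d
            load[d] += 1
    return placement
```

In production the co-activation statistics would come from the forecasting models the paper describes, and placement would be rebalanced as traffic patterns drift.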

🏢 Hugging Face
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Model2Kernel: Model-Aware Symbolic Execution For Safe CUDA Kernels

Researchers developed Model2Kernel, a system that automatically detects memory safety bugs in CUDA kernels used for large language model inference. The system discovered 353 previously unknown bugs across popular platforms like vLLM and Hugging Face with only nine false positives.

🏢 Hugging Face
AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

When Drafts Evolve: Speculative Decoding Meets Online Learning

Researchers introduce OnlineSpec, a framework that uses online learning to continuously improve draft models in speculative decoding for large language model inference acceleration. The approach leverages verification feedback to evolve draft models dynamically, achieving up to 24% speedup improvements across seven benchmarks and three foundation models.
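The feedback loop at the heart of this can be shown with a simpler knob: track an exponential moving average of the draft acceptance rate and adapt the speculation depth online. OnlineSpec evolves the draft model's weights themselves; this sketch (all names invented) adapts only the draft length, to illustrate learning from verification feedback:

```python
class AdaptiveDrafter:
    """Toy online adaptation from verification feedback: keep an EMA of
    the draft acceptance rate and grow/shrink the number of speculated
    tokens accordingly."""
    def __init__(self, k=4, k_min=1, k_max=8, alpha=0.3):
        self.k, self.k_min, self.k_max, self.alpha = k, k_min, k_max, alpha
        self.accept_rate = 0.5    # prior before any feedback arrives

    def feedback(self, accepted, proposed):
        rate = accepted / proposed
        self.accept_rate = (1 - self.alpha) * self.accept_rate + self.alpha * rate
        if self.accept_rate > 0.8 and self.k < self.k_max:
            self.k += 1           # drafts are trusted: speculate deeper
        elif self.accept_rate < 0.4 and self.k > self.k_min:
            self.k -= 1           # drafts keep being rejected: back off
        return self.k
```

The same verification signal that gates token acceptance is free training data, which is why online approaches like this add little overhead on top of standard speculative decoding.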

AI · Bullish · arXiv – CS AI · Mar 6 · 7/10

Agent Memory Below the Prompt: Persistent Q4 KV Cache for Multi-Agent LLM Inference on Edge Devices

Researchers developed a memory management system for multi-agent AI systems on edge devices that reduces memory requirements by 4x through 4-bit quantization and eliminates redundant computation by persisting KV caches to disk. The solution reduces time-to-first-token by up to 136x while maintaining minimal impact on model quality across three major language model architectures.
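The 4x memory reduction comes from storing KV entries in 4 bits instead of 16. A minimal sketch of symmetric Q4 packing, two values per byte (illustrative only; production Q4 KV caches typically use per-group scales and zero points, and the packed bytes are what would be persisted to disk between agent turns):

```python
def quantize_q4(values):
    """Toy symmetric 4-bit quantization of a KV block: map floats to
    signed ints in [-8, 7] with one per-block scale, packing two values
    per byte. Assumes an even number of values."""
    scale = max(abs(v) for v in values) / 7 or 1.0
    q = [max(-8, min(7, round(v / scale))) for v in values]
    packed = bytes(((q[i] & 0xF) << 4) | (q[i + 1] & 0xF)
                   for i in range(0, len(q), 2))
    return scale, packed

def dequantize_q4(scale, packed):
    out = []
    for byte in packed:
        for nib in ((byte >> 4) & 0xF, byte & 0xF):
            if nib >= 8:
                nib -= 16      # restore the sign from 4-bit two's complement
            out.append(nib * scale)
    return out
```

Persisting these compact blocks lets an agent skip re-prefilling its context on the next turn, which is where the large time-to-first-token savings come from.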

🏢 Perplexity · 🧠 Llama
AI × Crypto · Bullish · arXiv – CS AI · Mar 5 · 6/10

A Multi-Dimensional Quality Scoring Framework for Decentralized LLM Inference with Proof of Quality

Researchers developed a multi-dimensional quality scoring framework for decentralized LLM inference networks that evaluates output quality across multiple dimensions including semantic quality and query-output alignment. The framework integrates with Proof of Quality (PoQ) mechanisms to provide better incentive alignment and defense against adversarial attacks in distributed AI compute networks.
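A multi-dimensional score of this kind reduces, at its simplest, to a weighted mean over per-dimension scores. The dimension names and weights below are invented placeholders, not the paper's scoring rubric:

```python
def quality_score(metrics, weights):
    """Toy multi-dimensional quality score: weighted mean of per-dimension
    scores in [0, 1]. A PoQ-style network could compare this score against
    a threshold before rewarding the node that produced the output."""
    total_w = sum(weights.values())
    return sum(weights[d] * metrics[d] for d in weights) / total_w

# Hypothetical dimensions; the paper's rubric includes semantic quality
# and query-output alignment, among others.
m = {'semantic': 0.9, 'alignment': 0.8, 'fluency': 1.0}
w = {'semantic': 2.0, 'alignment': 2.0, 'fluency': 1.0}
```

Weighting multiple dimensions matters for adversarial robustness: a node gaming one metric (say, fluent but off-topic output) still scores poorly overall.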

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

ConfigSpec: Profiling-Based Configuration Selection for Distributed Edge–Cloud Speculative LLM Serving

ConfigSpec introduces a profiling-based framework for optimizing distributed LLM inference across edge-cloud systems using speculative decoding. The research reveals that no single configuration can simultaneously optimize throughput, cost efficiency, and energy efficiency, so dynamic, device-aware configuration selection is needed rather than fixed deployments.
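Profiling-based selection of this kind boils down to: profile each candidate configuration offline, then pick the maximizer of a deployment-specific objective. The sketch below (all field and configuration names invented) shows how different weightings of the same profiles yield different winners, mirroring the no-single-best-configuration finding:

```python
def select_config(profiles, weights):
    """Toy profiling-based selection: given offline profiles of candidate
    edge-cloud configurations, return the one maximizing a weighted
    objective over throughput, cost, and energy."""
    def score(p):
        return (weights['throughput'] * p['throughput']
                - weights['cost'] * p['cost']
                - weights['energy'] * p['energy'])
    return max(profiles, key=score)

# Hypothetical offline profiles (arbitrary units).
profiles = [
    {'name': 'edge-draft+cloud-verify', 'throughput': 90,  'cost': 3, 'energy': 2},
    {'name': 'cloud-only',              'throughput': 120, 'cost': 9, 'energy': 5},
    {'name': 'edge-only',               'throughput': 40,  'cost': 1, 'energy': 1},
]
```

A throughput-heavy weighting selects the cloud-only configuration while a cost- and energy-sensitive one selects edge-only, which is exactly why a fixed deployment cannot serve all devices and workloads well.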

AI · Neutral · arXiv – CS AI · 3d ago · 6/10

A-IO: Adaptive Inference Orchestration for Memory-Bound NPUs

A-IO addresses critical memory-bound bottlenecks in LLM deployment on NPU platforms like Ascend 910B by tackling the 'Model Scaling Paradox' and limitations of current speculative decoding techniques. The research reveals that static single-model deployment strategies and kernel synchronization overhead significantly constrain inference performance on heterogeneous accelerators.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Researchers have developed AloePri, the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input/output data while maintaining 96.5-100% accuracy and resisting state-of-the-art attacks, successfully tested on a 671B parameter model.