y0news
🧠 AI · 🟢 Bullish · Importance 7/10

ODMA: On-Demand Memory Allocation Strategy for LLM Serving on LPDDR-Class Accelerators

arXiv – CS AI | Guoqiang Zou, Wanyu Wang, Hao Zheng, Longxiang Yin, Yinhe Han
🤖 AI Summary

Researchers developed ODMA, an on-demand memory allocation strategy that improves Large Language Model (LLM) serving throughput by up to 27% on memory-constrained accelerators. The technique targets the poor random-access bandwidth of LPDDR-class systems, combining adaptive bucket partitioning with dynamic prediction of each request's generation length.

Key Takeaways
  • ODMA improves generation-length prediction accuracy from 98.60% to 99.55% on Alpaca and from 82.68% to 93.36% on Google-NQ benchmarks.
  • The strategy increases KV-cache utilization by up to 19.25% and throughput by 23–27% over static allocation baselines.
  • ODMA addresses critical limitations of existing memory management techniques on LPDDR-class accelerators with poor random-access bandwidth.
  • The approach uses adaptive bucket partitioning and fallback safety pools to handle distribution drift and heavy-tailed request patterns.
  • Testing was conducted with DeepSeek-R1-Distill-Qwen-7B on Cambricon MLU370-X4 accelerators, demonstrating real-world applicability.
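The bucketed-allocation idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the class names, bucket sizes, and fallback policy are all assumptions. It shows the core flow the takeaways describe: route each request to a KV-cache bucket sized by its predicted generation length, and spill mispredicted or heavy-tailed requests into a shared safety pool.

```python
# Hypothetical sketch of an ODMA-style allocator (names and sizes are
# illustrative assumptions, not taken from the paper).

from dataclasses import dataclass


@dataclass
class BucketPool:
    capacity: int   # number of KV-cache slots in this bucket class
    block_len: int  # max tokens a slot of this class can hold
    used: int = 0

    def try_alloc(self) -> bool:
        """Claim one slot if any remain."""
        if self.used < self.capacity:
            self.used += 1
            return True
        return False


@dataclass
class OdmaAllocator:
    buckets: list          # BucketPool list, sorted by block_len ascending
    fallback: BucketPool   # safety pool for mispredicted / heavy-tail requests

    def allocate(self, predicted_len: int):
        # Pick the smallest bucket whose block length covers the prediction.
        # (Adaptive partitioning would re-tune capacities from observed traffic.)
        for b in self.buckets:
            if predicted_len <= b.block_len and b.try_alloc():
                return b
        # Distribution drift or a heavy-tail request: use the safety pool.
        if self.fallback.try_alloc():
            return self.fallback
        return None  # no room: request must be queued


alloc = OdmaAllocator(
    buckets=[BucketPool(capacity=4, block_len=256),
             BucketPool(capacity=2, block_len=1024)],
    fallback=BucketPool(capacity=1, block_len=4096),
)
print(alloc.allocate(200).block_len)  # short request fits the 256-token bucket
```

The safety pool is what keeps a length misprediction from stalling the whole batch: contiguous, conservatively sized slots absorb outliers so the tightly packed buckets can stay near full utilization.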