22 articles tagged with #memory-management. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠Researchers developed LightThinker++, a new framework that enables large language models to compress intermediate reasoning thoughts and manage memory more efficiently. The system reduces peak token usage by up to 70% while improving accuracy by 2.42% and maintaining performance over extended reasoning tasks.
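The summary gives only the headline numbers; the underlying idea, compressing older reasoning steps into a short summary once the trace exceeds a token budget, can be sketched roughly as follows. The function names and budget values are hypothetical placeholders, not LightThinker++'s implementation.

```python
# Illustrative sketch of thought compression for long reasoning traces.
# summarize() and the budget numbers are placeholders, not LightThinker++ itself.

def count_tokens(text: str) -> int:
    # Crude proxy for a real tokenizer.
    return len(text.split())

def summarize(thoughts: list[str]) -> str:
    # Stand-in for an LLM call that condenses earlier reasoning steps.
    return "SUMMARY(" + " | ".join(t[:40] for t in thoughts) + ")"

def compress_trace(trace: list[str], budget: int = 2048, keep_recent: int = 4) -> list[str]:
    """Replace older thoughts with a compact summary when the trace exceeds the budget."""
    total = sum(count_tokens(t) for t in trace)
    if total <= budget or len(trace) <= keep_recent:
        return trace
    old, recent = trace[:-keep_recent], trace[-keep_recent:]
    return [summarize(old)] + recent
```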
AI · Bullish · arXiv – CS AI · Mar 26 · 7/10
🧠Researchers developed ODMA, a new memory allocation strategy that improves Large Language Model serving performance on memory-constrained accelerators by up to 27%. The technique addresses bandwidth limitations in LPDDR systems through adaptive bucket partitioning and dynamic generation-length prediction.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers introduce StatePlane, a model-agnostic cognitive state management system that enables AI systems to maintain coherent reasoning over long interaction horizons without expanding context windows or retraining models. The system uses episodic, semantic, and procedural memory mechanisms inspired by cognitive psychology to overcome current limitations in large language models.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Justitia is a new scheduling system for task-parallel LLM agents that optimizes GPU server performance through selective resource allocation based on completion order prediction. The system uses memory-centric cost quantification and virtual-time fair queuing to achieve both efficiency and fairness in LLM serving environments.
🏢 Meta
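For the Justitia entry, virtual-time fair queuing is a classic scheduling idea; a generic sketch of ordering agent tasks by memory-weighted virtual finish times might look like the following. The cost model and field names are assumptions for illustration, not Justitia's actual design.

```python
import heapq
from dataclasses import dataclass, field

# Generic virtual-time fair queuing sketch; the memory-centric cost model here
# is a placeholder, not Justitia's actual quantification.

@dataclass(order=True)
class Task:
    virtual_finish: float
    agent: str = field(compare=False)
    mem_cost: float = field(compare=False)  # e.g. KV-cache bytes x expected runtime

class FairQueue:
    def __init__(self):
        self.heap: list[Task] = []
        self.virtual_time: dict[str, float] = {}

    def submit(self, agent: str, mem_cost: float, weight: float = 1.0) -> None:
        start = self.virtual_time.get(agent, 0.0)
        finish = start + mem_cost / weight      # heavier tasks finish later in virtual time
        self.virtual_time[agent] = finish
        heapq.heappush(self.heap, Task(finish, agent, mem_cost))

    def next_task(self) -> Task | None:
        return heapq.heappop(self.heap) if self.heap else None
```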
AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠Researchers introduce MoE-SpAc, a new framework for efficient Mixture-of-Experts model inference on edge devices that achieves 42% improvement over existing baselines. The system uses speculative decoding as a memory management tool and demonstrates 4.04x average speedup across benchmarks.
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠Researchers developed Pichay, a demand paging system that treats LLM context windows like computer memory with hierarchical caching. The system reduces context consumption by up to 93% in production by evicting stale content and managing memory more efficiently, addressing fundamental scalability issues in AI systems.
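A minimal sketch of the demand-paging analogy: treat context chunks as pages and evict the least-recently-used ones when the window fills up. This is a generic LRU illustration, not Pichay's eviction policy.

```python
from collections import OrderedDict

# Minimal LRU-style eviction over context "pages"; illustrative only, not Pichay's design.
class ContextPager:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.pages: OrderedDict[str, str] = OrderedDict()     # page_id -> text

    def access(self, page_id: str, text: str | None = None) -> str | None:
        if text is not None:
            self.pages[page_id] = text
        if page_id in self.pages:
            self.pages.move_to_end(page_id)                   # mark as recently used
        self._evict_stale()
        return self.pages.get(page_id)

    def _evict_stale(self) -> None:
        def total() -> int:
            return sum(len(t.split()) for t in self.pages.values())
        while self.pages and total() > self.max_tokens:
            self.pages.popitem(last=False)                    # evict least-recently-used page
```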
AI · Bullish · arXiv – CS AI · Mar 6 · 7/10
🧠Researchers introduce AMV-L, a new memory management framework for long-running LLM systems that uses utility-based lifecycle management instead of traditional time-based retention. The system improves throughput by 3.1x and reduces latency by up to 4.7x while maintaining retrieval quality by controlling memory working-set size rather than just retention time.
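The distinction AMV-L draws, evicting by utility while capping the working-set size rather than retiring items by age alone, can be illustrated generically; the utility formula below is a made-up placeholder, not the paper's.

```python
import time
from dataclasses import dataclass, field

# Generic utility-based eviction with a working-set cap; the utility formula is a placeholder.
@dataclass
class MemoryItem:
    key: str
    size: int                      # tokens
    hits: int = 0
    created: float = field(default_factory=time.time)

    def utility(self) -> float:
        age_hours = (time.time() - self.created) / 3600 + 1e-6
        return self.hits / age_hours   # frequently used, recent items score higher

def enforce_working_set(items: list[MemoryItem], max_tokens: int) -> list[MemoryItem]:
    """Keep the highest-utility items that fit within the working-set budget."""
    kept, used = [], 0
    for item in sorted(items, key=lambda i: i.utility(), reverse=True):
        if used + item.size <= max_tokens:
            kept.append(item)
            used += item.size
    return kept
```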
AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 2
🧠Researchers introduce Neural Paging, a new architecture that addresses the computational bottleneck of finite context windows in Large Language Models by implementing a hierarchical system that decouples reasoning from memory management. The approach reduces computational complexity from O(N²) to O(N·K²) for long-horizon reasoning tasks, potentially enabling more efficient AI agents.
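As a quick sanity check on the reported complexity terms (arithmetic on the stated expressions only, not the paper's cost model), here is how O(N²) and O(N·K²) compare at representative sizes:

```python
# Plug representative sizes into the stated complexity terms O(N^2) vs O(N*K^2).
# This is arithmetic on the reported expressions only, not the paper's cost model.
for N in (10**5, 10**6):
    K = 100                      # page size; illustrative choice
    full, paged = N**2, N * K**2
    print(f"N={N:>9,}  full={full:.1e}  paged={paged:.1e}  reduction={full/paged:.0f}x")
```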
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce FreeKV, a training-free optimization framework that dramatically improves KV cache retrieval efficiency for large language models with long context windows. The system achieves up to 13x speedup compared to existing methods while maintaining near-lossless accuracy through speculative retrieval and hybrid memory layouts.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers introduce Contextual Memory Virtualisation (CMV), a system that preserves LLM understanding across extended sessions by treating context as version-controlled state using DAG-based management. The system includes a trimming algorithm that reduces token counts by 20-86% while preserving all user interactions, demonstrating particular efficiency in tool-use sessions.
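Version-controlled context with DAG-based management can be pictured as commit nodes linked to their parents, with trimming keeping only what the active head can reach. The structure and names below are illustrative, not CMV's actual data model.

```python
from dataclasses import dataclass, field

# Illustrative DAG of context "commits"; not CMV's actual data model.
@dataclass
class ContextCommit:
    commit_id: str
    content: str
    parents: list[str] = field(default_factory=list)

class ContextDAG:
    def __init__(self):
        self.commits: dict[str, ContextCommit] = {}
        self.head: str | None = None

    def commit(self, commit_id: str, content: str) -> None:
        parents = [self.head] if self.head else []
        self.commits[commit_id] = ContextCommit(commit_id, content, parents)
        self.head = commit_id

    def trim(self) -> None:
        """Drop commits unreachable from the current head (a simple form of trimming)."""
        keep, stack = set(), [self.head] if self.head else []
        while stack:
            cid = stack.pop()
            if cid not in keep:
                keep.add(cid)
                stack.extend(self.commits[cid].parents)
        self.commits = {cid: c for cid, c in self.commits.items() if cid in keep}
```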
AI · Bullish · MarkTechPost · Mar 15 · 7/10
🧠OpenViking is an open-source context database from Volcengine that revolutionizes how AI agents manage context by organizing it through a filesystem paradigm rather than flat text chunks. The system aims to make memory, resources, and skills manageable through a unified architecture for AI agent systems like OpenClaw.
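The filesystem paradigm described, where memory, resources, and skills are addressed like paths instead of flat chunks, might look roughly like this; the layout and API below are guesses for illustration, not OpenViking's actual interface.

```python
from pathlib import PurePosixPath

# Toy path-addressed context store illustrating a filesystem-style layout
# (e.g. /memory/..., /resources/..., /skills/...); not OpenViking's real API.
class ContextFS:
    def __init__(self):
        self.entries: dict[PurePosixPath, str] = {}

    def write(self, path: str, content: str) -> None:
        self.entries[PurePosixPath(path)] = content

    def read(self, path: str) -> str | None:
        return self.entries.get(PurePosixPath(path))

    def listdir(self, prefix: str) -> list[str]:
        p = PurePosixPath(prefix)
        return [str(k) for k in self.entries if p in k.parents]

fs = ContextFS()
fs.write("/memory/episodic/meeting-notes", "Discussed paging design.")
fs.write("/skills/summarize", "Prompt template for summarization.")
print(fs.listdir("/memory"))
```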
AI · Neutral · Microsoft Research Blog · Mar 10 · 6/10
🧠Microsoft Research highlights a counterintuitive problem where giving AI agents more memory actually reduces their effectiveness. As interaction logs accumulate, they become large, filled with irrelevant content, and difficult to search through, making it harder for agents to find relevant information for current tasks.
AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠Researchers propose Adaptive Memory Admission Control (A-MAC), a new framework for managing long-term memory in LLM-based agents. The system improves memory precision-recall by 31% while reducing latency through structured decision-making based on five interpretable factors rather than opaque LLM-driven policies.
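The summary names five interpretable factors without listing them; a generic admission-control scorer with hypothetical factors and weights (explicitly not the paper's) could look like:

```python
# Hypothetical admission-control scoring; the factor names and weights are NOT from the paper.
WEIGHTS = {"novelty": 0.3, "task_relevance": 0.3, "specificity": 0.2,
           "recency": 0.1, "source_reliability": 0.1}

def admit(candidate: dict[str, float], threshold: float = 0.6) -> bool:
    """Admit a candidate memory only if its weighted factor score clears the threshold."""
    score = sum(WEIGHTS[name] * candidate.get(name, 0.0) for name in WEIGHTS)
    return score >= threshold

print(admit({"novelty": 0.9, "task_relevance": 0.8, "specificity": 0.5,
             "recency": 1.0, "source_reliability": 0.7}))  # score 0.78 -> True
```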
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers have developed Semantic XPath, a tree-structured memory system for conversational AI that improves performance by 176.7% over traditional methods while using only 9.1% of the tokens. The system addresses scalability issues in long-term AI conversations by efficiently accessing and updating structured memory instead of appending growing conversation history.
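Tree-structured memory that updates nodes in place, rather than appending an ever-growing conversation history, can be sketched like this; the path syntax and node layout are illustrative, not the paper's.

```python
# Toy tree-structured memory addressed by slash-separated paths; illustrative only.
class TreeMemory:
    def __init__(self):
        self.root: dict = {}

    def update(self, path: str, value: str) -> None:
        node = self.root
        *parts, leaf = path.strip("/").split("/")
        for part in parts:
            node = node.setdefault(part, {})
        node[leaf] = value                      # overwrite in place instead of appending history

    def get(self, path: str):
        node = self.root
        for part in path.strip("/").split("/"):
            if not isinstance(node, dict) or part not in node:
                return None
            node = node[part]
        return node

mem = TreeMemory()
mem.update("user/preferences/tone", "concise")
mem.update("user/preferences/tone", "detailed")   # updates the same node, no history growth
print(mem.get("user/preferences/tone"))           # "detailed"
```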
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers propose ActMem, a novel memory framework for LLM agents that combines memory retrieval with active causal reasoning to handle complex decision-making scenarios. The framework transforms dialogue history into structured causal graphs and uses counterfactual reasoning to resolve conflicts between past states and current intentions, significantly outperforming existing baselines in memory-dependent tasks.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠Researchers introduce AMemGym, an interactive benchmarking environment for evaluating and optimizing memory management in long-horizon conversations with AI assistants. The framework addresses limitations in current memory evaluation methods by enabling on-policy testing with LLM-simulated users and revealing performance gaps in existing memory systems like RAG and long-context LLMs.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠OrbitFlow is a new KV cache management system for long-context LLM serving that uses adaptive memory allocation and fine-grained optimization to improve performance. The system achieves up to 66% better SLO attainment and 3.3x higher throughput by dynamically managing GPU memory usage during token generation.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠Researchers have introduced PiKV, an open-source KV cache management framework designed to optimize memory and communication costs for Mixture of Experts (MoE) language models across multi-GPU and multi-node inference. The system uses expert-sharded storage, intelligent routing, adaptive scheduling, and compression to improve efficiency in large-scale AI model deployment.
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 11
🧠Researchers from PKU-SEC-Lab have developed KEEP, a new memory management system that significantly improves the efficiency of AI-powered embodied planning by optimizing KV cache usage. The system achieves 2.68x speedup compared to text-based memory methods while maintaining accuracy, addressing a key bottleneck in memory-augmented Large Language Models for complex planning tasks.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers introduce SideQuest, a novel KV cache management system that uses Large Reasoning Models to compress memory usage during long-horizon AI tasks. The system reduces peak token usage by up to 65% while maintaining accuracy by having the model itself determine which tokens are useful to keep in memory.
AI · Neutral · Lil'Log (Lilian Weng) · Sep 24 · 6/10
🧠This article reviews training parallelism paradigms and memory optimization techniques for training very large neural networks across multiple GPUs. It covers architectural designs and methods to overcome GPU memory limitations and extended training times for deep learning models.
🏢 OpenAI
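The post surveys several parallelism paradigms; the simplest of them, data parallelism, looks roughly like this in PyTorch (a minimal DistributedDataParallel skeleton, not code from the article):

```python
# Minimal data-parallel training skeleton; a sketch, not code from the post.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Expects the usual torchrun environment variables (RANK, WORLD_SIZE, LOCAL_RANK).
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])      # gradients are all-reduced across GPUs
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                               # DDP overlaps the all-reduce with backward
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```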
AI · Neutral · Hugging Face Blog · Dec 24 · 4/10 · 6
🧠A technical guide to visualizing and understanding GPU memory usage in PyTorch, the popular machine learning framework. It helps developers optimize model training and deployment by tracking and managing GPU memory more deliberately.
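Whatever the guide's exact workflow, PyTorch exposes the relevant counters and snapshot tooling directly; a minimal way to inspect GPU memory looks like this (standard torch.cuda calls, not code taken from the article):

```python
import torch

# Inspect allocator state; these are standard torch.cuda APIs.
if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")
    print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
    print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.1f} MiB")
    print(f"peak:      {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
    print(torch.cuda.memory_summary(abbreviated=True))

    # Record an allocation timeline for the interactive viewer at pytorch.org/memory_viz
    # (available in recent PyTorch versions).
    torch.cuda.memory._record_memory_history()
    y = x @ x
    torch.cuda.memory._dump_snapshot("memory_snapshot.pickle")
```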