y0news

#memory-management News & Analysis

22 articles tagged with #memory-management. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

LightThinker++: From Reasoning Compression to Memory Management

Researchers developed LightThinker++, a new framework that enables large language models to compress intermediate reasoning thoughts and manage memory more efficiently. The system reduces peak token usage by up to 70% while improving accuracy by 2.42% and maintaining performance over extended reasoning tasks.
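The compression idea can be illustrated with a minimal sketch (not the paper's actual method): once a running token budget is exceeded, older reasoning steps are replaced with short model-generated summaries while recent steps stay verbatim. `summarize` and `count_tokens` are stand-ins for a model-backed summarizer and tokenizer; the toy versions below are illustrative only.

```python
# Hypothetical sketch: compress older reasoning steps into short summaries
# once the running token count exceeds a budget, keeping recent steps intact.
def compress_reasoning(steps, budget, summarize, count_tokens):
    """steps: list of reasoning strings; summarize/count_tokens are
    stand-ins for a model-backed summarizer and tokenizer."""
    total = sum(count_tokens(s) for s in steps)
    compressed = list(steps)
    i = 0
    # Compress from the oldest step forward until the budget is met.
    while total > budget and i < len(compressed) - 1:
        short = summarize(compressed[i])
        total += count_tokens(short) - count_tokens(compressed[i])
        compressed[i] = short
        i += 1
    return compressed

# Toy stand-ins: token count = word count, summary = first 3 words.
toy_count = lambda s: len(s.split())
toy_summarize = lambda s: " ".join(s.split()[:3])

steps = ["step one has many extra words here",
         "step two also has many extra words",
         "final answer derivation"]
out = compress_reasoning(steps, budget=12, summarize=toy_summarize,
                         count_tokens=toy_count)
```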

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10
🧠

ODMA: On-Demand Memory Allocation Strategy for LLM Serving on LPDDR-Class Accelerators

Researchers developed ODMA, a new memory allocation strategy that improves Large Language Model serving performance on memory-constrained accelerators by up to 27%. The technique addresses bandwidth limitations in LPDDR systems through adaptive bucket partitioning and dynamic generation-length prediction.
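Bucket partitioning can be sketched roughly as follows (the bucket sizes and placement rule here are illustrative, not ODMA's actual configuration): each request is assigned the smallest bucket that covers its predicted generation length, which bounds fragmentation on a fixed-size memory pool.

```python
# Hypothetical sketch of bucketed memory allocation: a request goes into the
# smallest bucket that covers its predicted generation length; overly long
# predictions are capped at the largest bucket.
def pick_bucket(predicted_len, buckets):
    for b in sorted(buckets):
        if predicted_len <= b:
            return b
    return max(buckets)  # cap at the largest bucket

BUCKETS = [128, 256, 512, 1024]       # assumed token-capacity tiers

requests = [90, 130, 400, 2000]       # predicted output lengths per request
alloc = [pick_bucket(n, BUCKETS) for n in requests]
```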

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

StatePlane: A Cognitive State Plane for Long-Horizon AI Systems Under Bounded Context

Researchers introduce StatePlane, a model-agnostic cognitive state management system that enables AI systems to maintain coherent reasoning over long interaction horizons without expanding context windows or retraining models. The system uses episodic, semantic, and procedural memory mechanisms inspired by cognitive psychology to overcome current limitations in large language models.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Justitia: Fair and Efficient Scheduling of Task-parallel LLM Agents with Selective Pampering

Justitia is a new scheduling system for task-parallel LLM agents that optimizes GPU server performance through selective resource allocation based on completion order prediction. The system uses memory-centric cost quantification and virtual-time fair queuing to achieve both efficiency and fairness in LLM serving environments.
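Virtual-time fair queuing, the classical technique the summary names, can be sketched in a few lines (this is the textbook mechanism, not Justitia's implementation): each agent's tasks are stamped with a virtual finish time (cost divided by the agent's weight, accumulated on the agent's own virtual clock), and the scheduler always serves the smallest stamp.

```python
import heapq

# Sketch of virtual-time fair queuing: per-agent virtual clocks ensure a
# heavy submitter cannot starve a light one, since its later tasks carry
# ever-larger virtual finish times.
class FairQueue:
    def __init__(self):
        self.vclock = {}   # per-agent virtual time
        self.heap = []
        self.seq = 0       # tie-breaker for equal stamps

    def submit(self, agent, cost, weight=1.0):
        start = self.vclock.get(agent, 0.0)
        finish = start + cost / weight
        self.vclock[agent] = finish
        heapq.heappush(self.heap, (finish, self.seq, agent, cost))
        self.seq += 1

    def next_task(self):
        finish, _, agent, cost = heapq.heappop(self.heap)
        return agent, cost

q = FairQueue()
q.submit("A", cost=10)
q.submit("A", cost=10)
q.submit("B", cost=5)
order = [q.next_task()[0] for _ in range(3)]
```

Even though agent A submitted first, B's single cheap task is served before A's second task, because fairness is judged in virtual time rather than arrival order.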

🏢 Meta
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

The Missing Memory Hierarchy: Demand Paging for LLM Context Windows

Researchers developed Pichay, a demand paging system that treats LLM context windows like computer memory with hierarchical caching. The system reduces context consumption by up to 93% in production by evicting stale content and managing memory more efficiently, addressing fundamental scalability issues in AI systems.
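The demand-paging analogy can be made concrete with a minimal LRU sketch (an assumption about the mechanism, not Pichay's actual design): only recently used context chunks stay resident, and touching an evicted chunk pages it back in from a backing store.

```python
from collections import OrderedDict

# Hypothetical sketch of demand paging for a context window: the most
# recently used chunks stay resident; touching an evicted chunk "pages" it
# back in, evicting the least-recently-used resident chunk to backing store.
class ContextPager:
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()   # chunk_id -> text (LRU order)
        self.backing = {}               # evicted chunks

    def touch(self, chunk_id, text=None):
        if chunk_id in self.resident:
            self.resident.move_to_end(chunk_id)
        else:
            # Page in from the backing store, or register a new chunk.
            self.resident[chunk_id] = self.backing.pop(chunk_id, text)
            if len(self.resident) > self.capacity:
                old, val = self.resident.popitem(last=False)
                self.backing[old] = val
        return self.resident[chunk_id]

pager = ContextPager(capacity=2)
pager.touch("sys", "system prompt")
pager.touch("t1", "tool result 1")
pager.touch("t2", "tool result 2")   # evicts "sys" to backing store
resident = list(pager.resident)
```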

AI · Bullish · arXiv – CS AI · Mar 6 · 7/10
🧠

AMV-L: Lifecycle-Managed Agent Memory for Tail-Latency Control in Long-Running LLM Systems

Researchers introduce AMV-L, a new memory management framework for long-running LLM systems that uses utility-based lifecycle management instead of traditional time-based retention. The system improves throughput by 3.1x and reduces latency by up to 4.7x while maintaining retrieval quality by controlling memory working-set size rather than just retention time.
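The contrast with time-based retention can be sketched simply (the utility model below is an assumption for illustration, not AMV-L's formula): score each memory item, for example by recency-discounted hit count, and cap the working set, evicting the lowest-utility items first.

```python
# Hypothetical sketch of utility-based lifecycle management: rather than
# expiring memories by age alone, score each item and bound the working-set
# size, keeping the highest-utility items regardless of when they were made.
def trim_working_set(memories, max_items, now):
    # memories: list of dicts with 'id', 'hits', 'last_used' timestamps
    def utility(m):
        age = now - m["last_used"]
        return m["hits"] / (1.0 + age)   # assumed utility model
    kept = sorted(memories, key=utility, reverse=True)[:max_items]
    return {m["id"] for m in kept}

mems = [
    {"id": "old-hot",  "hits": 9, "last_used": 2},   # old but heavily used
    {"id": "new-cold", "hits": 1, "last_used": 9},   # recent, rarely used
    {"id": "new-hot",  "hits": 6, "last_used": 9},
]
kept = trim_working_set(mems, max_items=2, now=10)
```

Note that a purely time-based policy would have kept both recent items and dropped `old-hot`, despite its high utility.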

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10
🧠

Neural Paging: Learning Context Management Policies for Turing-Complete Agents

Researchers introduce Neural Paging, a new architecture that addresses the computational bottleneck of finite context windows in Large Language Models by implementing a hierarchical system that decouples reasoning from memory management. The approach reduces computational complexity from O(N²) to O(N·K²) for long-horizon reasoning tasks, potentially enabling more efficient AI agents.
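A back-of-the-envelope check of the stated complexity figures, taking N as total tokens and K as the page size (concrete values chosen here for illustration):

```python
# Comparing the claimed costs O(N^2) for full attention versus O(N * K^2)
# for the paged scheme, at an assumed N = 100_000 tokens and page size K = 100.
N, K = 100_000, 100
full = N * N          # quadratic attention over the whole horizon
paged = N * K * K     # the paper's stated paged cost
ratio = full / paged
```

At these values the paged scheme is an order of magnitude cheaper, and the gap widens as N grows while K stays fixed.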

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference

Researchers introduce FreeKV, a training-free optimization framework that dramatically improves KV cache retrieval efficiency for large language models with long context windows. The system achieves up to 13x speedup compared to existing methods while maintaining near-lossless accuracy through speculative retrieval and hybrid memory layouts.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

Contextual Memory Virtualisation: DAG-Based State Management and Structurally Lossless Trimming for LLM Agents

Researchers introduce Contextual Memory Virtualisation (CMV), a system that preserves LLM understanding across extended sessions by treating context as version-controlled state using DAG-based management. The system includes a trimming algorithm that reduces token counts by 20-86% while preserving all user interactions, demonstrating particular efficiency in tool-use sessions.
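A minimal sketch of structurally lossless trimming (the node types and rule below are illustrative assumptions, not CMV's algorithm): context is held as typed nodes with parent links, and trimming drops bulky tool outputs while every user turn survives, so the interaction structure is preserved.

```python
# Hypothetical sketch: trim a linked context history by dropping bulky
# tool outputs while keeping all user and assistant turns intact.
def trim(nodes):
    # nodes: list of dicts with 'id', 'kind', 'parent', 'text'
    return [n for n in nodes if n["kind"] != "tool_output"]

session = [
    {"id": 0, "kind": "user",        "parent": None, "text": "list files"},
    {"id": 1, "kind": "tool_output", "parent": 0,    "text": "(4000 lines)"},
    {"id": 2, "kind": "assistant",   "parent": 1,    "text": "There are 12 files."},
    {"id": 3, "kind": "user",        "parent": 2,    "text": "open the first"},
]
trimmed = trim(session)
kinds = [n["kind"] for n in trimmed]
```

The large token savings in tool-use sessions follow directly: tool outputs dominate the token count but carry the least reusable state.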

AI · Neutral · Microsoft Research Blog · Mar 10 · 6/10
🧠

From raw interaction to reusable knowledge: Rethinking memory for AI agents

Microsoft Research highlights a counterintuitive problem where giving AI agents more memory actually reduces their effectiveness. As interaction logs accumulate, they become large, filled with irrelevant content, and difficult to search through, making it harder for agents to find relevant information for current tasks.

AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠

Adaptive Memory Admission Control for LLM Agents

Researchers propose Adaptive Memory Admission Control (A-MAC), a new framework for managing long-term memory in LLM-based agents. The system improves memory precision-recall by 31% while reducing latency through structured decision-making based on five interpretable factors rather than opaque LLM-driven policies.
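Admission control on interpretable factors can be sketched as a weighted threshold rule (the factor names and weights below are illustrative; the paper's five factors are not listed in this summary):

```python
# Hypothetical sketch of rule-based memory admission: score each candidate
# on a few interpretable factors and admit it only if the weighted score
# clears a threshold, instead of asking an LLM to decide opaquely.
WEIGHTS = {"novelty": 0.4, "task_relevance": 0.3,
           "specificity": 0.2, "recurrence": 0.1}

def admit(candidate, threshold=0.5):
    score = sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)
    return score >= threshold

keep = admit({"novelty": 0.9, "task_relevance": 0.8,
              "specificity": 0.5, "recurrence": 0.0})
drop = admit({"novelty": 0.1, "task_relevance": 0.2,
              "specificity": 0.1, "recurrence": 0.9})
```

Because the decision is a transparent weighted sum, each admission or rejection can be audited factor by factor.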

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Semantic XPath: Structured Agentic Memory Access for Conversational AI

Researchers have developed Semantic XPath, a tree-structured memory system for conversational AI that improves performance by 176.7% over traditional methods while using only 9.1% of the tokens. The system addresses scalability issues in long-term AI conversations by efficiently accessing and updating structured memory instead of appending growing conversation history.
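Path-addressed tree memory can be sketched minimally (an assumption about the access pattern, not the paper's system): facts live in a nested tree and are read or written via slash-separated paths, so the agent touches only the relevant subtree instead of replaying the whole conversation history.

```python
# Hypothetical sketch of tree-structured agent memory with path access:
# writes create intermediate nodes as needed; reads walk the path.
class MemoryTree:
    def __init__(self):
        self.root = {}

    def set(self, path, value):
        node = self.root
        *parents, leaf = path.strip("/").split("/")
        for p in parents:
            node = node.setdefault(p, {})
        node[leaf] = value

    def get(self, path):
        node = self.root
        for p in path.strip("/").split("/"):
            node = node[p]
        return node

mem = MemoryTree()
mem.set("user/preferences/language", "French")
mem.set("user/profile/name", "Ada")
lang = mem.get("user/preferences/language")
```

The token savings come from this locality: answering a preference question reads one leaf rather than the full transcript.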

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

ActMem: Bridging the Gap Between Memory Retrieval and Reasoning in LLM Agents

Researchers propose ActMem, a novel memory framework for LLM agents that combines memory retrieval with active causal reasoning to handle complex decision-making scenarios. The framework transforms dialogue history into structured causal graphs and uses counterfactual reasoning to resolve conflicts between past states and current intentions, significantly outperforming existing baselines in memory-dependent tasks.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

AMemGym: Interactive Memory Benchmarking for Assistants in Long-Horizon Conversations

Researchers introduce AMemGym, an interactive benchmarking environment for evaluating and optimizing memory management in long-horizon conversations with AI assistants. The framework addresses limitations in current memory evaluation methods by enabling on-policy testing with LLM-simulated users and revealing performance gaps in existing memory systems like RAG and long-context LLMs.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

OrbitFlow: SLO-Aware Long-Context LLM Serving with Fine-Grained KV Cache Reconfiguration

OrbitFlow is a new KV cache management system for long-context LLM serving that uses adaptive memory allocation and fine-grained optimization to improve performance. The system achieves up to 66% better SLO attainment and 3.3x higher throughput by dynamically managing GPU memory usage during token generation.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

PiKV: KV Cache Management System for Mixture of Experts

Researchers have introduced PiKV, an open-source KV cache management framework designed to optimize memory and communication costs for Mixture of Experts (MoE) language models across multi-GPU and multi-node inference. The system uses expert-sharded storage, intelligent routing, adaptive scheduling, and compression to improve efficiency in large-scale AI model deployment.
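Expert-sharded storage can be illustrated with a toy placement rule (the round-robin mapping below is an assumption, not PiKV's scheduler): KV entries produced under each expert live on the device hosting that expert, so routing a token to expert e touches only shard e.

```python
# Hypothetical sketch of expert-sharded KV placement for an MoE model:
# each expert's KV cache lives on one device, chosen here by a simple
# round-robin rule over the available devices.
def shard_for(expert_id, num_devices):
    return expert_id % num_devices   # assumed round-robin placement

NUM_DEVICES = 4
routing = [0, 5, 2, 7, 4]            # expert chosen per token by the router
placement = [shard_for(e, NUM_DEVICES) for e in routing]
```

Co-locating an expert with its KV entries avoids cross-device traffic on every decode step, which is where the communication savings come from.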

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠

KEEP: A KV-Cache-Centric Memory Management System for Efficient Embodied Planning

Researchers from PKU-SEC-Lab have developed KEEP, a new memory management system that significantly improves the efficiency of AI-powered embodied planning by optimizing KV cache usage. The system achieves 2.68x speedup compared to text-based memory methods while maintaining accuracy, addressing a key bottleneck in memory-augmented Large Language Models for complex planning tasks.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
🧠

SideQuest: Model-Driven KV Cache Management for Long-Horizon Agentic Reasoning

Researchers introduce SideQuest, a novel KV cache management system that uses Large Reasoning Models to compress memory usage during long-horizon AI tasks. The system reduces peak token usage by up to 65% while maintaining accuracy by having the model itself determine which tokens are useful to keep in memory.

AI · Neutral · Lil'Log (Lilian Weng) · Sep 24 · 6/10
🧠

How to Train Really Large Models on Many GPUs?

This article reviews training parallelism paradigms and memory optimization techniques for training very large neural networks across multiple GPUs. It covers architectural designs and methods to overcome GPU memory limitations and extended training times for deep learning models.

🏢 OpenAI
AI · Neutral · Hugging Face Blog · Dec 24 · 4/10
🧠

Visualize and understand GPU memory in PyTorch

A technical guide to visualizing and understanding GPU memory usage in PyTorch, helping developers diagnose allocation patterns and manage memory more effectively during model training and deployment.