11 articles tagged with #memory-architecture. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 3d ago · 7/10
🧠Researchers introduce soul.py, an open-source architecture addressing catastrophic forgetting in AI agents by distributing identity across multiple memory systems rather than centralizing it. The framework implements persistent identity through separable components and a hybrid RAG+RLM retrieval system, drawing inspiration from how human memory survives neurological damage.
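The core idea of distributing identity can be sketched in a few lines. This is an illustrative toy, not soul.py's actual API: the class name, store names, and methods are all hypothetical, and the real framework adds RAG+RLM retrieval on top.

```python
# Hypothetical sketch: identity facts replicated across separable
# memory stores, so losing one store degrades the agent's sense of
# self rather than destroying it.

class DistributedIdentity:
    """Identity distributed across independent memory systems."""

    def __init__(self):
        # Separable components, loosely analogous to episodic,
        # semantic, and procedural memory.
        self.stores = {"episodic": {}, "semantic": {}, "procedural": {}}

    def remember(self, key, value):
        # Replicate each identity fact into every store.
        for store in self.stores.values():
            store[key] = value

    def damage(self, name):
        # Simulate loss of one memory system.
        self.stores[name].clear()

    def recall(self, key):
        # Identity survives as long as any one store retains the fact.
        for store in self.stores.values():
            if key in store:
                return store[key]
        return None

agent = DistributedIdentity()
agent.remember("name", "soulbot")
agent.damage("episodic")
print(agent.recall("name"))  # identity survives partial damage
```

The design choice mirrors the neurological inspiration: no single component is a point of total failure for identity.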
AI · Bullish · arXiv – CS AI · 3d ago · 7/10
🧠Researchers present Synthius-Mem, a brain-inspired AI memory system that achieves 94.4% accuracy on the LoCoMo benchmark while maintaining 99.6% adversarial robustness—preventing hallucinations about facts users never shared. The system outperforms existing approaches by structuring persona extraction across six cognitive domains rather than treating memory as raw dialogue retrieval, reducing token consumption by 5x.
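The contrast with raw dialogue retrieval can be sketched as a structured store that only answers from explicitly filed facts. The class, method names, and the six domain labels below are illustrative guesses, not Synthius-Mem's actual taxonomy:

```python
# Hypothetical sketch of structured persona memory: facts are filed
# under explicit cognitive domains, and queries about facts the user
# never shared return nothing instead of a hallucinated guess.

DOMAINS = ("demographics", "preferences", "relationships",
           "events", "goals", "beliefs")  # illustrative labels only

class PersonaMemory:
    def __init__(self):
        self.persona = {d: {} for d in DOMAINS}

    def extract(self, domain, key, value):
        if domain not in self.persona:
            raise ValueError(f"unknown domain: {domain}")
        self.persona[domain][key] = value

    def query(self, domain, key):
        # Adversarial robustness: answer only from stored facts.
        return self.persona.get(domain, {}).get(key)

mem = PersonaMemory()
mem.extract("preferences", "coffee", "black, no sugar")
print(mem.query("preferences", "coffee"))
print(mem.query("relationships", "sister"))  # never shared -> None
```

Returning `None` for unshared facts is the toy analogue of the 99.6% adversarial-robustness claim: the system has nothing to retrieve, so it has nothing to hallucinate from.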
AI · Bullish · arXiv – CS AI · 3d ago · 7/10
🧠Researchers propose VaCoAl, a hyperdimensional computing architecture that combines sparse distributed memory with Galois-field algebra to address limitations in modern AI systems like catastrophic forgetting and the binding problem. The deterministic system demonstrates emergent properties equivalent to spike-timing-dependent plasticity and achieves multi-hop reasoning across 25.5M paths in knowledge graphs, positioning it as a complementary third paradigm to large language models.
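The binding idea at the heart of hyperdimensional computing can be shown with binary hypervectors, where elementwise addition in GF(2) is XOR and is its own inverse. This is a minimal sketch of the general technique; the field and vector representation VaCoAl actually uses may differ:

```python
import random

# Sketch of hyperdimensional binding over GF(2): XOR a role vector
# with a filler vector to bind them; XOR the bound vector with the
# role again to recover the filler exactly.

DIM = 4096

def hypervector(rng):
    return [rng.randint(0, 1) for _ in range(DIM)]

def bind(a, b):
    # Elementwise addition in GF(2) == XOR.
    return [x ^ y for x, y in zip(a, b)]

def similarity(a, b):
    # Fraction of matching bits; ~0.5 for unrelated vectors.
    return sum(x == y for x, y in zip(a, b)) / DIM

rng = random.Random(0)
role, filler = hypervector(rng), hypervector(rng)
bound = bind(role, filler)
recovered = bind(bound, role)   # unbinding: XOR with the role again
print(similarity(recovered, filler))  # 1.0, exact recovery
```

Note that `bound` itself is nearly orthogonal to both inputs (similarity near 0.5), which is why binding solves the binding problem: the composite stores the pair without resembling either member.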
AI · Bullish · arXiv – CS AI · 3d ago · 7/10
🧠Researchers propose MGA (Memory-Driven GUI Agent), a minimalist AI framework that improves GUI automation by decoupling long-horizon tasks into independent steps linked through structured state memory. The approach addresses critical limitations in current multimodal AI agents—context overload and architectural redundancy—while maintaining competitive performance with reduced complexity.
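The decoupling pattern can be sketched as a loop where each step is a pure function of a compact structured state, never the full interaction history. Function and key names below are hypothetical, not MGA's API:

```python
# Hypothetical sketch of decoupled GUI steps: each step sees only a
# small structured state and returns a small update, so context size
# stays constant over long-horizon tasks instead of growing.

def run_task(steps, initial_state):
    state = dict(initial_state)
    for step in steps:
        # Independent step: reads current state, returns an update;
        # no accumulated history is ever passed along.
        state.update(step(state))
    return state

def open_settings(state):
    return {"screen": "settings"}

def toggle_dark_mode(state):
    assert state["screen"] == "settings"
    return {"dark_mode": not state.get("dark_mode", False)}

final = run_task([open_settings, toggle_dark_mode],
                 {"screen": "home"})
print(final)  # {'screen': 'settings', 'dark_mode': True}
```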
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠Researchers introduce LLMA-Mem, a memory framework for LLM multi-agent systems that balances team size with lifelong learning capabilities. The study reveals that larger agent teams don't always perform better long-term, and smaller teams with better memory design can outperform larger ones while reducing costs.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers introduce D-MEM, a biologically-inspired memory architecture for AI agents that uses dopamine-like reward prediction error routing to dramatically reduce computational costs. The system reduces token consumption by over 80% and eliminates quadratic scaling bottlenecks by selectively processing only high-importance information through cognitive restructuring.
AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠Researchers developed HyMEM, a brain-inspired hybrid memory system that significantly improves GUI agents' ability to interact with computers. The system uses graph-based structured memory combining symbolic nodes with trajectory embeddings, enabling smaller 7B/8B models to match or exceed performance of larger closed-source models like GPT-4o.
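The hybrid retrieval idea, symbolic nodes plus embeddings, can be sketched as nearest-neighbor lookup followed by graph expansion. The class, toy 2-D embeddings, and node names below are all illustrative, not HyMEM's:

```python
import math

# Hypothetical sketch of hybrid graph memory: symbolic nodes carry
# toy trajectory embeddings; retrieval picks the closest node by
# cosine similarity, then expands along edges for structured context.

class GraphMemory:
    def __init__(self):
        self.nodes = {}   # name -> embedding
        self.edges = {}   # name -> set of neighbor names

    def add(self, name, embedding, neighbors=()):
        self.nodes[name] = embedding
        self.edges.setdefault(name, set()).update(neighbors)
        for n in neighbors:
            self.edges.setdefault(n, set()).add(name)

    def retrieve(self, query):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        best = max(self.nodes, key=lambda n: cos(query, self.nodes[n]))
        return best, sorted(self.edges[best])

mem = GraphMemory()
mem.add("login_screen", [1.0, 0.1])
mem.add("enter_password", [0.9, 0.3], neighbors={"login_screen"})
mem.add("settings_menu", [0.1, 1.0])
print(mem.retrieve([1.0, 0.2]))  # ('login_screen', ['enter_password'])
```

The graph hop is what embedding search alone cannot do: having matched `login_screen`, the agent also gets the symbolically linked follow-up action.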
AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠Researchers developed KernelSkill, a multi-agent framework that optimizes GPU kernel performance using expert knowledge rather than trial-and-error approaches. The system achieved 100% success rates and significant speedups (1.92x to 5.44x) over existing methods, addressing a critical bottleneck in AI system efficiency.
AI · Bullish · arXiv – CS AI · Mar 6 · 7/10
🧠Researchers propose a new 'Memory-as-Ontology' paradigm for AI agents that treats memory as the foundation of digital existence rather than just a functional tool. The approach introduces Animesis, a Constitutional Memory Architecture designed for persistent digital citizens whose identities must survive across model transitions and extended lifecycles.
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠Researchers propose Rashomon Memory, a new AI agent memory architecture where multiple goal-conditioned agents maintain parallel interpretations of the same events and negotiate through argumentation at query time. The system allows AI agents to handle conflicting perspectives on experiences rather than forcing a single interpretation, using Dung's argumentation semantics to determine which proposals survive retrieval.
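Dung's grounded semantics, which the entry names, has a standard fixed-point algorithm that fits in a few lines. This is the textbook construction, not the paper's code; the example argument names are invented:

```python
# Grounded semantics (standard algorithm): iterate the characteristic
# function from the empty set. An interpretation survives retrieval
# only if every attack on it is counter-attacked by already-accepted
# arguments.

def grounded_extension(arguments, attacks):
    attackers = {a: {x for x, y in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        # Accept every argument all of whose attackers are themselves
        # attacked by the currently accepted set.
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in accepted)
                      for b in attackers[a])}
        if new == accepted:
            return accepted
        accepted = new

args = {"A", "B", "C"}
# B attacks A; C attacks B; C is unattacked.
atk = {("B", "A"), ("C", "B")}
print(sorted(grounded_extension(args, atk)))  # ['A', 'C']
```

The example shows reinstatement, the property that makes argumentation attractive for conflicting memories: A is attacked, but survives because its attacker B is defeated by the unattacked C.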
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed MM-Mem, a new pyramidal multimodal memory architecture that enables AI systems to better understand long-horizon videos by mimicking human cognitive memory processes. The system addresses current limitations in multimodal large language models by creating a hierarchical memory structure that progressively distills detailed visual information into high-level semantic understanding.
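The pyramidal structure can be sketched as repeated grouping and distillation: each level summarizes chunks of the level below, leaving fine detail at the base and coarse semantics at the apex. The function below is a hypothetical toy (string truncation stands in for the model's actual distillation):

```python
# Hypothetical sketch of a memory pyramid: group entries from the
# level below and distill each group with a summarizer (a toy join-
# and-truncate here; a model call in practice). Queries can start at
# the apex and drill down to frame-level detail.

def build_pyramid(frames, group=2, summarize=None):
    summarize = summarize or (lambda chunk: " + ".join(chunk)[:40])
    levels = [frames]
    while len(levels[-1]) > 1:
        below = levels[-1]
        levels.append([summarize(below[i:i + group])
                       for i in range(0, len(below), group)])
    return levels  # levels[0] = raw detail, levels[-1] = apex summary

frames = ["cat enters", "cat sits", "dog enters", "dog barks"]
pyramid = build_pyramid(frames)
for i, level in enumerate(pyramid):
    print(i, level)
```

For long-horizon video the payoff is that a query touches the apex first and only descends into the handful of relevant base-level chunks, rather than attending over every frame.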