11 articles tagged with #memory. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv › CS AI · Apr 7 · 7/10
🧠 Research published on arXiv demonstrates that large language models playing poker can develop sophisticated Theory of Mind capabilities when equipped with persistent memory, progressing to advanced levels of opponent modeling and strategic deception. The study found memory is necessary and sufficient for this emergent behavior, while domain expertise enhances but doesn't gate ToM development.
AI · Bullish · Blockonomi · Mar 11 · 7/10
🧠 Wolfe Research has raised Micron's price target by 43% to $500, citing expected AI-driven memory demand that could drive 100% year-over-year DRAM price growth in 2026. The company's Q2 earnings are scheduled for March 18.
AI · Bullish · arXiv › CS AI · Mar 5 · 6/10
🧠 Researchers introduce MIKASA, a comprehensive benchmark suite designed to evaluate memory capabilities in reinforcement learning agents, particularly for robotic manipulation tasks. The framework includes MIKASA-Base for general memory RL evaluation and MIKASA-Robo with 32 specialized tasks for tabletop robotic manipulation scenarios.
AI · Neutral · arXiv › CS AI · Mar 4 · 6/10
🧠 Researchers analyzed memory systems in LLM agents and found that retrieval methods are more critical than write strategies for performance. Simple raw chunk storage matched expensive alternatives, suggesting current memory pipelines may discard useful context that retrieval systems cannot compensate for.
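The "simple raw chunk storage" finding lends itself to a small illustration. Below is a hypothetical sketch (the class name and scoring are invented, not the paper's code): writes append text verbatim with no summarization or filtering, and a bag-of-words cosine retriever does all the work of surfacing relevant context.

```python
import math
import re
from collections import Counter

class RawChunkMemory:
    """Toy raw-chunk memory: store text verbatim, rely on retrieval."""

    def __init__(self):
        self.chunks = []

    def write(self, text):
        # "Write strategy": no compression or filtering, just append.
        self.chunks.append(text)

    def _vec(self, text):
        # Bag-of-words vector over lowercase alphanumeric tokens.
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def _cosine(self, a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=2):
        # Rank stored chunks by similarity to the query.
        qv = self._vec(query)
        ranked = sorted(self.chunks,
                        key=lambda c: self._cosine(qv, self._vec(c)),
                        reverse=True)
        return ranked[:k]

mem = RawChunkMemory()
mem.write("User prefers dark mode in the editor.")
mem.write("Meeting with the robotics team moved to Friday.")
mem.write("The editor crashes when opening large files.")
print(mem.retrieve("what editor settings does the user like?", k=1))
```

In this framing, improving the retriever changes which context the agent sees, while changing the write path (e.g. summarizing before storage) can only lose information the retriever might later have needed.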
AI · Neutral · arXiv › CS AI · Mar 17 · 6/10
🧠 Research reveals that Large Language Models struggle with dynamic Theory of Mind tasks, particularly tracking how others' beliefs change over time. While LLMs can infer current beliefs effectively, they fail to maintain and retrieve prior belief states after updates occur, showing patterns consistent with human cognitive biases.
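A minimal sketch of the kind of dynamic belief tracking being probed here, in the style of a Sally-Anne scenario (the setup and names are illustrative, not the study's benchmark): the world changes while one agent is away, so that agent's belief must stay frozen, and earlier belief states must remain retrievable.

```python
def run_belief_scenario():
    """Track each agent's believed object location as events unfold.
    Agents update beliefs only for events they witness."""
    events = [
        ("place", "marble", "basket", {"Sally", "Anne"}),  # both see it
        ("move",  "marble", "box",    {"Anne"}),           # Sally is away
    ]
    world = {}
    beliefs = {"Sally": {}, "Anne": {}}
    snapshots = []  # belief state after each event, for "prior belief" queries
    for _, obj, place, witnesses in events:
        world[obj] = place
        for agent in witnesses:
            beliefs[agent][obj] = place
        snapshots.append({a: dict(b) for a, b in beliefs.items()})
    return world, beliefs, snapshots

world, beliefs, snapshots = run_belief_scenario()
print(world["marble"])                 # box
print(beliefs["Sally"]["marble"])      # basket: belief frozen while away
print(snapshots[0]["Anne"]["marble"])  # basket: Anne's earlier belief
```

The paper's finding maps onto the last line: models can usually answer the "current belief" query but struggle with queries that require recovering a superseded state like `snapshots[0]`.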
AI · Bullish · Blockonomi · Mar 16 · 6/10
🧠 Wells Fargo analyst Aaron Rakers maintains Strong Buy ratings on semiconductor companies Marvell (MRVL), Micron (MU), and Rambus (RMBS) while raising price targets. The bullish outlook is driven by surging AI-related memory demand across the semiconductor sector.
AI · Neutral · arXiv › CS AI · Mar 3 · 6/10
🧠 Research analyzing 39 large language models reveals they exhibit proactive interference (early information disrupting recall of recent information), unlike humans, who typically show retroactive interference (recent information disrupting recall of earlier material). The study found this pattern is universal across all tested LLMs, with larger models showing better resistance to retroactive interference but unchanged proactive interference patterns.
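The effect is straightforward to operationalize. Here is a hypothetical scoring harness in the spirit of the study (the trial format and labels are invented): one fact is overwritten several times, the most recent value is the correct answer, and answering with an earlier value counts as a proactive-interference error.

```python
def build_trial(values):
    """Build a prompt where one slot is repeatedly overwritten.
    The last value is correct; earlier values are interference lures."""
    lines = [f"Update {i + 1}: the box now holds a {v} ball."
             for i, v in enumerate(values)]
    prompt = "\n".join(lines) + "\nQ: what colour ball is in the box now?"
    return prompt, values[-1], set(values[:-1])

def score(answer, correct, lures):
    """Classify a model answer against the trial."""
    if correct in answer.lower():
        return "correct"
    if any(v in answer.lower() for v in lures):
        return "proactive_interference"  # early info intruded on recent
    return "other"

prompt, correct, lures = build_trial(["red", "blue", "green"])
print(score("It holds a green ball.", correct, lures))  # correct
print(score("A red ball.", correct, lures))             # proactive_interference
```

Aggregating the `proactive_interference` rate over many trials and update depths would give the kind of curve the study compares across model sizes.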
AI · Bullish · Google Research Blog · Dec 4 · 6/10
🧠 The article covers Titans + MIRAS, Google Research work on equipping AI systems with long-term memory. The approach targets current limitations in AI memory retention and could improve performance across a range of applications.
AI · Bullish · Lil'Log (Lilian Weng) · Jun 23 · 6/10
🧠 The article explores LLM-powered autonomous agents that use large language models as core controllers, going beyond text generation to serve as general problem solvers. Key systems like AutoGPT, GPT-Engineer, and BabyAGI demonstrate the potential of agents with planning, memory, and tool-use capabilities.
AI · Neutral · arXiv › CS AI · Mar 5 · 4/10
🧠 Researchers propose a standardized framework for classifying and evaluating memory capabilities in reinforcement learning agents, drawing from cognitive science concepts. The paper addresses confusion around memory terminology in RL and provides practical definitions for different memory types along with robust experimental methodologies.
AI · Neutral · arXiv › CS AI · Mar 3 · 4/10
🧠 Researchers introduced RMBench, a simulation benchmark for evaluating memory-dependent robotic manipulation tasks, addressing a gap left by existing policies that struggle with historical reasoning. The study includes 9 manipulation tasks and proposes Mem-0, a modular policy designed to provide insights into how architectural choices affect memory performance in robotic systems.