26 articles tagged with #catastrophic-forgetting. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv · CS AI · 3d ago · 7/10
🧠 Researchers introduce soul.py, an open-source architecture addressing catastrophic forgetting in AI agents by distributing identity across multiple memory systems rather than centralizing it. The framework implements persistent identity through separable components and a hybrid RAG+RLM retrieval system, drawing inspiration from how human memory survives neurological damage.
AI · Bullish · arXiv · CS AI · 3d ago · 7/10
🧠 Researchers propose VaCoAl, a hyperdimensional computing architecture that combines sparse distributed memory with Galois-field algebra to address limitations in modern AI systems like catastrophic forgetting and the binding problem. The deterministic system demonstrates emergent properties equivalent to spike-timing-dependent plasticity and achieves multi-hop reasoning across 25.5M paths in knowledge graphs, positioning it as a complementary third paradigm to large language models.
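As background for the binding problem mentioned in this summary, the classic hyperdimensional-computing primitive binds two high-dimensional vectors with XOR so the pair can later be unbound exactly. This is a minimal illustrative sketch of that general idea, not VaCoAl's actual mechanism (its Galois-field algebra generalizes beyond GF(2)); the names `random_hv` and `bind` are ours:

```python
import random

def random_hv(dim, rng):
    """A random binary hypervector of the given dimensionality."""
    return [rng.randrange(2) for _ in range(dim)]

def bind(a, b):
    """XOR binding: associates two hypervectors into one of the same
    shape. Because XOR is its own inverse, binding the result with
    either operand recovers the other exactly."""
    return [x ^ y for x, y in zip(a, b)]

rng = random.Random(0)
role, filler = random_hv(1000, rng), random_hv(1000, rng)
pair = bind(role, filler)
assert bind(pair, role) == filler   # unbinding recovers the filler
```

The bound pair is nearly orthogonal to both operands, which is what lets many role–filler pairs be superposed in one memory without confusing them.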
AI · Neutral · arXiv · CS AI · Apr 10 · 7/10
🧠 Researchers introduce the Informational Buildup Framework (IBF), a new approach to continual learning that eliminates catastrophic forgetting by treating information as structural alignment rather than stored parameters. The framework demonstrates superior performance across multiple domains including chess and image classification, achieving near-zero forgetting without requiring raw data replay.
AI · Bullish · arXiv · CS AI · Mar 17 · 7/10
🧠 Researchers introduce SCAN, a new framework for editing Large Language Models that prevents catastrophic forgetting during sequential knowledge updates. The method uses sparse circuit manipulation instead of dense parameter changes, maintaining model performance even after 3,000 sequential edits across major models like Gemma2, Qwen3, and Llama3.1.
🧠 Llama
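The sparse-versus-dense contrast behind this entry can be sketched in a few lines: touch only the few parameters most implicated in an edit and leave everything else alone. This is a toy top-k update of our own devising to illustrate the intuition, not SCAN's actual circuit-identification method:

```python
def sparse_edit(weights, grads, k, lr=0.1):
    """Update only the k weights with the largest gradient magnitude,
    leaving all other parameters untouched. Dense fine-tuning would
    move every weight, disturbing unrelated knowledge."""
    ranked = sorted(range(len(weights)), key=lambda i: abs(grads[i]), reverse=True)
    edited = list(weights)
    for i in ranked[:k]:
        edited[i] -= lr * grads[i]
    return edited

w = [1.0, 2.0, 3.0, 4.0]
g = [0.0, 5.0, 0.1, 0.0]
new_w = sparse_edit(w, g, k=1)
assert new_w == [1.0, 1.5, 3.0, 4.0]   # only one weight changed
```

Restricting each edit to a small parameter subset is one plausible reason sequential edits compose better: the edits mostly do not overwrite one another.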
AI · Bullish · arXiv · CS AI · Mar 5 · 6/10
🧠 Researchers discovered that pretrained Vision-Language-Action (VLA) models demonstrate remarkable resistance to catastrophic forgetting in continual learning scenarios, unlike smaller models trained from scratch. Simple Experience Replay techniques achieve near-zero forgetting with minimal replay data, suggesting large-scale pretraining fundamentally changes continual learning dynamics for robotics applications.
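Experience Replay, the technique this entry credits with near-zero forgetting, is simple enough to sketch: keep a small uniform sample of past experience and mix a little of it into every new-task batch. A minimal reservoir-sampling version (illustrative only; the class and parameter names are ours, not the paper's):

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer that maintains a uniform random sample of
    everything seen so far via reservoir sampling."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Each example survives with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def mixed_batch(self, new_batch, replay_fraction=0.25):
        """Append a small sample of old experience to a new-task batch."""
        k = min(len(self.buffer), int(len(new_batch) * replay_fraction))
        return new_batch + self.rng.sample(self.buffer, k)

buf = ReplayBuffer(capacity=100)
for i in range(1000):            # stream from task A
    buf.add(("task_a", i))
batch = buf.mixed_batch([("task_b", i) for i in range(32)])
assert len(batch) == 40          # 32 new examples + 8 replayed
```

The entry's finding is that for large pretrained VLA models even a very small `replay_fraction` suffices, whereas models trained from scratch typically need far more aggressive rehearsal.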
AI · Bullish · arXiv · CS AI · Mar 4 · 6/10
🧠 Researchers developed cPNN (Continuous Progressive Neural Networks), a new AI architecture that handles evolving data streams with temporal dependencies while avoiding catastrophic forgetting. The system addresses concept drift in time series data by combining recurrent neural networks with progressive learning techniques, showing quick adaptation to new concepts.
AI · Bullish · arXiv · CS AI · Mar 4 · 7/10
🧠 Researchers have identified a critical flaw in reinforcement learning fine-tuning of large language models that causes degradation in multi-attempt performance despite improvements in single attempts. Their proposed solution, Diversity-Preserving Hybrid RL (DPH-RL), uses mass-covering f-divergences to maintain model diversity and prevent catastrophic forgetting while improving training efficiency.
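The "mass-covering" property this summary mentions is easy to see with the forward KL divergence, the textbook mass-covering f-divergence: it blows up whenever the fine-tuned policy drops probability mass that the reference model assigned, which is exactly the mode collapse that hurts multi-attempt performance. A toy sketch (our illustration of the general property, not DPH-RL's actual objective):

```python
import math

def forward_kl(p_ref, q_cur, eps=1e-12):
    """KL(p_ref || q_cur) over a discrete distribution. Large when
    q_cur assigns near-zero mass to outcomes p_ref considers likely."""
    return sum(p * math.log(p / max(q, eps))
               for p, q in zip(p_ref, q_cur) if p > 0)

p_ref = [0.5, 0.3, 0.2]             # diverse reference policy
q_diverse = [0.4, 0.35, 0.25]       # stays spread over answers
q_collapsed = [0.98, 0.01, 0.01]    # RL-collapsed onto one answer

# Collapsing modes is penalized far more than staying diverse:
assert forward_kl(p_ref, q_collapsed) > forward_kl(p_ref, q_diverse)
```

A reverse-KL or mode-seeking penalty would not punish the collapsed policy nearly as hard, which is why the choice of f-divergence matters for preserving diversity.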
AI · Bullish · arXiv · CS AI · Mar 3 · 7/10
🧠 Researchers introduce Dream2Learn (D2L), a continual learning framework that enables AI models to generate synthetic training data from their own internal representations, mimicking human dreaming for knowledge consolidation. The system creates novel 'dreamed classes' using diffusion models to improve forward knowledge transfer and prevent catastrophic forgetting in neural networks.
AI · Bullish · arXiv · CS AI · Feb 27 · 7/10
🧠 Researchers introduce GraftLLM, a new method for transferring knowledge between large language models using a 'SkillPack' format that preserves capabilities while avoiding catastrophic forgetting. The approach enables efficient model fusion and continual learning for heterogeneous models through modular knowledge storage.
AI · Bullish · arXiv · CS AI · 2d ago · 6/10
🧠 Researchers propose Joint Flashback Adaptation, a novel method to address catastrophic forgetting in large language models during incremental task learning. The approach uses limited prompts from previous tasks combined with latent task interpolation, demonstrating improved performance across 1000+ instruction-following and reasoning tasks without requiring full replay data.
AI · Neutral · arXiv · CS AI · 3d ago · 6/10
🧠 Researchers introduce LIFESTATE-BENCH, a benchmark for evaluating lifelong learning capabilities in large language models through multi-turn interactions using narrative datasets like Hamlet. Testing shows nonparametric approaches significantly outperform parametric methods, but all models struggle with catastrophic forgetting over extended interactions, revealing fundamental limitations in LLM memory and consistency.
🧠 GPT-4 · 🧠 Llama
AI · Bullish · arXiv · CS AI · 3d ago · 6/10
🧠 Researchers present Data Mixing Agent, an AI framework that uses reinforcement learning to automatically optimize how large language models balance training data from source and target domains during continual pre-training. The approach outperforms manual reweighting strategies while generalizing across different models, domains, and fields without requiring retraining.
AI · Bullish · arXiv · CS AI · Mar 17 · 6/10
🧠 Researchers propose DeLL, a new framework for autonomous driving systems that addresses lifelong learning challenges through dynamic knowledge spaces and causal inference mechanisms. The system uses Dirichlet process mixture models to prevent catastrophic forgetting and improve adaptability to new driving scenarios while maintaining previously learned knowledge.
AI · Bullish · arXiv · CS AI · Mar 17 · 6/10
🧠 Researchers introduce CATFormer, a new spiking neural network architecture that addresses catastrophic forgetting in continual learning through dynamic threshold neurons. The framework uses context-adaptive thresholds and task-agnostic inference to maintain knowledge across multiple learning tasks without performance degradation.
AI · Neutral · arXiv · CS AI · Mar 16 · 6/10
🧠 This comprehensive survey examines continual learning methodologies for large language models, focusing on three core training stages and methods to mitigate catastrophic forgetting. The research reveals that while current approaches show promise in specific domains, fundamental challenges remain in achieving seamless knowledge integration across diverse tasks and temporal scales.
AI · Bullish · arXiv · CS AI · Mar 16 · 6/10
🧠 Researchers developed UNIFIER, a continual learning framework for multimodal large language models (MLLMs) to adapt to changing visual scenarios without catastrophic forgetting. The framework addresses visual discrepancies across different environments like high-altitude, underwater, low-altitude, and indoor scenarios, showing significant improvements over existing methods.
🏢 Hugging Face
AI · Bullish · arXiv · CS AI · Mar 12 · 6/10
🧠 Researchers developed a new continual learning framework for human activity recognition (HAR) in IoT wearable devices that prevents AI models from forgetting previous tasks when learning new ones. The method uses gated adaptation to achieve 77.7% accuracy while reducing forgetting from 39.7% to 16.2%, while training only 2% of the model's parameters.
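Training only ~2% of parameters typically means the backbone is frozen and each task learns a tiny set of gates that modulate its features. A rough sketch of that general gated-adaptation idea (our simplification, not the paper's architecture; `gated_forward` is a hypothetical name):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_forward(frozen_features, gate_logits):
    """Scale each frozen backbone feature by a learned sigmoid gate.
    Only the gate logits are trainable, so each task keeps its own
    tiny gate vector while the shared weights never move."""
    return [f * sigmoid(g) for f, g in zip(frozen_features, gate_logits)]

features = [2.0, -1.0, 0.5]          # output of the frozen backbone
task_a_gates = [10.0, -10.0, 0.0]    # learned for task A, then stored
task_b_gates = [0.0, 10.0, -10.0]    # learned later for task B
out_a = gated_forward(features, task_a_gates)

# Training task B's gates cannot disturb task A's output, because
# task A's gates and the backbone are both untouched:
assert out_a == gated_forward(features, task_a_gates)
```

Because the per-task state is just the gate vector, storing one per task is cheap, which fits the wearable-device setting this entry describes.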
AI · Bullish · arXiv · CS AI · Mar 11 · 6/10
🧠 Researchers propose MSSR (Memory-Inspired Sampler and Scheduler Replay), a new framework for continual fine-tuning of large language models that mitigates catastrophic forgetting while maintaining adaptability. The method estimates sample-level memory strength and schedules rehearsal at adaptive intervals, showing superior performance across three backbone models and 11 sequential tasks compared to existing replay-based strategies.
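The "adaptive intervals" idea resembles spaced repetition: let a sample's estimated memory strength decay, rehearse when it falls below a threshold, and let each rehearsal consolidate the memory so the next interval is longer. A toy scheduler in that spirit (a hypothetical simplification of ours, not MSSR's actual estimator):

```python
def schedule_rehearsals(n_steps, decay=0.8, threshold=0.5, boost=2.0):
    """Return the training steps at which a single sample would be
    rehearsed. Strength decays geometrically each step; a rehearsal
    multiplies the consolidated strength, stretching the next gap."""
    strength = 1.0
    consolidated = 1.0
    events = []
    for step in range(1, n_steps + 1):
        strength *= decay
        if strength < threshold:
            events.append(step)
            consolidated *= boost     # each rehearsal consolidates the memory
            strength = consolidated   # so the next interval is longer
    return events

events = schedule_rehearsals(40)
intervals = [b - a for a, b in zip(events, events[1:])]
assert intervals == sorted(intervals)   # spacing grows adaptively
```

The payoff over fixed-interval replay is that rehearsal budget concentrates on freshly learned (weak) samples and tapers off for well-consolidated ones.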
AI · Bullish · arXiv · CS AI · Mar 3 · 6/10
🧠 Researchers propose IDER (Idempotent Experience Replay), a new continual learning method that addresses catastrophic forgetting in neural networks while improving prediction reliability. The approach uses idempotent properties to help AI models retain previously learned knowledge when acquiring new tasks, with demonstrated improvements in accuracy and reduced computational overhead.
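Idempotence simply means applying a map twice gives the same result as applying it once, f(f(x)) = f(x), so outputs are fixed points and repeated application is stable. A tiny checker illustrating the property this entry builds on (the connection to replay training is the paper's; the helper below is our illustration):

```python
def idempotence_gap(f, xs):
    """Largest deviation from f(f(x)) == f(x) over the given inputs.
    Zero means f is idempotent on those inputs."""
    return max(abs(f(f(x)) - f(x)) for x in xs)

clamp = lambda x: min(max(x, 0.0), 1.0)   # a projection: idempotent
shrink = lambda x: 0.5 * x                # a contraction: not idempotent

assert idempotence_gap(clamp, [-1.0, 0.3, 2.0]) == 0.0
assert idempotence_gap(shrink, [1.0]) > 0.0
```

Projections onto a set are the canonical idempotent maps; training a network so its own outputs are fixed points pushes it toward behaving like one.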
AI · Bullish · arXiv · CS AI · Mar 3 · 6/10
🧠 Researchers introduce Surgical Post-Training (SPoT), a new method to improve Large Language Model reasoning while preventing catastrophic forgetting. SPoT achieved a 6.2% accuracy improvement on Qwen3-8B using only 4k data pairs and 28 minutes of training, offering a more efficient alternative to traditional post-training approaches.
AI · Bullish · arXiv · CS AI · Mar 3 · 7/10
🧠 Researchers propose combining In-Weight Learning (IWL) and In-Context Learning (ICL) through modular memory architectures to solve continual learning challenges in AI. The framework aims to enable AI agents to continuously adapt and accumulate knowledge without catastrophic forgetting, addressing key limitations of current foundation models.
AI · Bullish · arXiv · CS AI · Feb 27 · 6/10
🧠 Researchers introduce NTK-CL, a new framework for parameter-efficient fine-tuning in continual learning that uses Neural Tangent Kernel theory to address catastrophic forgetting. The approach achieves state-of-the-art performance by tripling feature representation and implementing adaptive mechanisms to maintain task-specific knowledge while learning new tasks.
AI · Neutral · arXiv · CS AI · Mar 16 · 4/10
🧠 Researchers propose Residual SODAP, a new continual learning framework that addresses catastrophic forgetting in AI models when adapting to new domains without access to previous data. The method combines prompt-based adaptation with classifier knowledge preservation, achieving state-of-the-art results on three benchmarks.
AI · Neutral · arXiv · CS AI · Mar 4 · 4/10
🧠 Researchers have identified temporal imbalance as a key factor causing catastrophic forgetting in Class-Incremental Learning (CIL) systems. They propose Temporal-Adjusted Loss (TAL), a new method that uses temporal decay kernels to reweight negative supervision, demonstrating significant improvements in reducing forgetting across multiple CIL benchmarks.
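A temporal decay kernel over negative supervision can be pictured concretely: classes introduced many tasks ago receive exponentially smaller negative-gradient weight than classes from the current task, so old classes are not perpetually pushed down while new ones are learned. A hypothetical sketch in that spirit (our illustration; TAL's exact kernel and loss differ):

```python
import math

def tal_negative_weights(current_task, class_task_ids, tau=2.0):
    """Exponential decay weight for the negative-supervision term of
    each class, as a function of how many tasks ago it was learned.
    tau controls how quickly old classes are shielded."""
    return [math.exp(-(current_task - t) / tau) for t in class_task_ids]

# Four classes introduced at tasks 0, 2, 4 and 5; we are now at task 5:
w = tal_negative_weights(current_task=5, class_task_ids=[0, 2, 4, 5])
assert w == sorted(w)   # older classes get strictly smaller weights
assert w[-1] == 1.0     # current-task classes keep full supervision
```

The weights would then multiply each class's negative logit term in a standard cross-entropy-style loss, leaving the positive term untouched.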
AI · Neutral · Google Research Blog · Nov 7 · 4/10
🧠 A new machine learning paradigm called Nested Learning has been introduced for continual learning applications. This represents a theoretical advancement in AI algorithms that could improve how AI systems learn and adapt over time without forgetting previous knowledge.