41 articles tagged with #continual-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 11
🧠 Researchers developed FreeGNN, a continual source-free graph neural network framework for renewable energy forecasting that adapts to new sites without requiring source data or target labels. The system uses a teacher-student strategy with memory replay and achieved strong performance across three real-world datasets including GEFCom2012, Solar PV, and Wind SCADA.
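The teacher-student-with-replay pattern described in the summary can be sketched roughly as follows. This is a minimal sketch with a stand-in linear model: FreeGNN's actual GNN architecture, losses, and replay policy are not given in the summary, so `Model`, `adapt_step`, and all hyperparameters here are illustrative assumptions.

```python
import random

class Model:
    """Stand-in predictor; FreeGNN uses graph neural network layers."""
    def __init__(self, w=0.0):
        self.w = w

    def predict(self, x):
        return self.w * x

def ema_update(teacher, student, decay=0.99):
    """Teacher tracks an exponential moving average of student weights."""
    teacher.w = decay * teacher.w + (1 - decay) * student.w

def adapt_step(student, teacher, batch, replay_buffer, lr=0.01, replay_k=4):
    """One source-free adaptation step: fit teacher pseudo-targets on the
    new site's batch mixed with replayed samples from earlier sites."""
    replayed = random.sample(replay_buffer, min(replay_k, len(replay_buffer)))
    for x in batch + replayed:
        target = teacher.predict(x)              # pseudo-label from teacher
        grad = 2 * (student.predict(x) - target) * x
        student.w -= lr * grad
    replay_buffer.extend(batch)                  # remember the new site's data
    ema_update(teacher, student)
```

No source data or target labels appear anywhere in the loop: supervision comes entirely from the teacher's pseudo-targets and the replay memory.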
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠 Researchers propose Streaming Continual Learning (SCL) as a unified paradigm that combines Continual Learning and Streaming Machine Learning approaches. SCL aims to enable AI systems to both rapidly adapt to new information and retain previously learned knowledge, addressing limitations of existing methods that excel at only one aspect.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠 Researchers propose combining In-Weight Learning (IWL) and In-Context Learning (ICL) through modular memory architectures to solve continual learning challenges in AI. The framework aims to enable AI agents to continuously adapt and accumulate knowledge without catastrophic forgetting, addressing key limitations of current foundation models.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠 Researchers introduce Fly-CL, a bio-inspired framework for continual representation learning that significantly reduces training time while maintaining performance comparable to state-of-the-art methods. The approach, inspired by fly olfactory circuits, addresses multicollinearity issues in pre-trained models and enables more efficient similarity matching for real-time applications.
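As background on the bio-inspired side: the fly olfactory circuit is commonly modeled as a sparse random projection followed by a winner-take-all step (Dasgupta et al., 2017). The toy version below illustrates that circuit only; Fly-CL's actual pipeline is not described in the summary, and all parameters here are illustrative.

```python
import random

def fly_hash(x, num_kenyon=20, fan_in=3, top_k=2, seed=0):
    """Toy fly-circuit hash: sparse random projection + winner-take-all."""
    rng = random.Random(seed)
    dim = len(x)
    # each "Kenyon cell" sums a few randomly chosen input dimensions
    projections = [sum(x[i] for i in rng.sample(range(dim), fan_in))
                   for _ in range(num_kenyon)]
    # winner-take-all: keep only the top_k most active cells
    winners = sorted(range(num_kenyon), key=lambda j: projections[j])[-top_k:]
    tag = [0] * num_kenyon
    for j in winners:
        tag[j] = 1
    return tag
```

The resulting sparse binary tags make similarity matching cheap, since comparing two inputs reduces to counting shared active cells.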
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10 · 3
🧠 Researchers propose FIRE, a new reinitialization method for deep neural networks that balances stability and plasticity when learning from nonstationary data. The method uses mathematical optimization to maintain prior knowledge while adapting to new tasks, showing superior performance across visual learning, language modeling, and reinforcement learning domains.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 12
🧠 Researchers developed Hybrid Class-Aware Selective Replay (Hybrid-CASR), a continual learning method that improves AI-based software vulnerability detection by addressing catastrophic forgetting in temporal scenarios. The method achieved a 0.667 Macro-F1 score while reducing training time by 17% compared to baseline approaches on CVE data from 2018-2024.
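A class-aware replay buffer of the kind the name suggests can be sketched as below: a bounded per-class store so that rare vulnerability classes are never crowded out of replay. Hybrid-CASR's actual selection criterion is not given in the summary, so the eviction and sampling rules here are illustrative assumptions.

```python
from collections import defaultdict

class ClassAwareBuffer:
    """Per-class replay memory with a fixed budget per class."""
    def __init__(self, per_class_cap=2):
        self.per_class_cap = per_class_cap
        self.store = defaultdict(list)

    def add(self, sample, label):
        bucket = self.store[label]
        if len(bucket) >= self.per_class_cap:
            bucket.pop(0)            # evict the oldest sample of this class
        bucket.append(sample)

    def replay_batch(self):
        # one sample per class, so rare classes are always represented
        return [(label, bucket[-1])
                for label, bucket in self.store.items() if bucket]
```

Because the budget is enforced per class rather than globally, a flood of samples from a common CVE class cannot evict the only exemplars of a rare one.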
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 16
🧠 Researchers demonstrate that activation function design is crucial for maintaining neural network plasticity in continual learning scenarios. They introduce two new activation functions (Smooth-Leaky and Randomized Smooth-Leaky) that help prevent models from losing their ability to adapt to new tasks over time.
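The paper's exact definitions are not in the summary, so the sketch below is purely illustrative: one way to build a "smooth leaky" activation is to interpolate between a leaky slope and the identity via softplus, which removes the kink of LeakyReLU, with a randomized variant that samples the slope per call.

```python
import math
import random

def smooth_leaky(x, alpha=0.1):
    """Illustrative smooth leaky unit: behaves like alpha*x for large
    negative x and like x for large positive x, with no kink at zero."""
    # numerically stable softplus: log(1 + e^x) = max(x, 0) + log1p(e^-|x|)
    softplus = max(x, 0.0) + math.log1p(math.exp(-abs(x)))
    return alpha * x + (1 - alpha) * softplus

def randomized_smooth_leaky(x, alpha_range=(0.05, 0.2), rng=random):
    """Stochastic slope is one way to keep units from saturating identically."""
    return smooth_leaky(x, rng.uniform(*alpha_range))
```

The nonzero slope everywhere keeps gradients flowing through "dead" regions, which is one plausible mechanism for preserving plasticity across tasks.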
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠 Researchers introduce NTK-CL, a new framework for parameter-efficient fine-tuning in continual learning that uses Neural Tangent Kernel theory to address catastrophic forgetting. The approach achieves state-of-the-art performance by tripling the feature representation and implementing adaptive mechanisms to maintain task-specific knowledge while learning new tasks.
AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers have developed SyMPLER, an explainable AI model for time series forecasting that uses dynamic piecewise-linear approximations to handle nonstationary environments. The model automatically determines when to add new local models based on prediction errors using Statistical Learning Theory, achieving comparable performance to black-box models while maintaining interpretability.
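The error-triggered growth mechanism can be sketched as follows. The statistical bound SyMPLER derives from learning theory is not given in the summary, so a fixed threshold stands in for it here, and the whole class is an illustrative assumption rather than the paper's method.

```python
class PiecewiseLinearForecaster:
    """Toy piecewise-linear forecaster that spawns a new local model
    whenever the active model's error exceeds a bound."""
    def __init__(self, error_bound=1.0):
        self.models = [(0.0, 0.0)]        # list of (slope, intercept)
        self.active = 0
        self.error_bound = error_bound

    def predict(self, x):
        a, b = self.models[self.active]
        return a * x + b

    def update(self, x, y, lr=0.1):
        err = y - self.predict(x)
        if abs(err) > self.error_bound:
            # regime shift detected: add a fresh local model at (x, y)
            self.models.append((0.0, y))
            self.active = len(self.models) - 1
        else:
            # within bound: refine the active local model by gradient step
            a, b = self.models[self.active]
            self.models[self.active] = (a + lr * err * x, b + lr * err)
```

Interpretability follows from the structure: each regime of the series is described by one readable slope/intercept pair rather than a black-box function.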
AI · Neutral · arXiv – CS AI · Mar 16 · 4/10
🧠 Researchers propose Residual SODAP, a new continual learning framework that addresses catastrophic forgetting in AI models when adapting to new domains without access to previous data. The method combines prompt-based adaptation with classifier knowledge preservation, achieving state-of-the-art results on three benchmarks.
AI · Neutral · arXiv – CS AI · Mar 16 · 4/10
🧠 Researchers propose a new continual learning approach called Prompt-Prototype (ProP) that eliminates key-value pairing dependencies in AI models. The method uses task-specific prompts and prototypes to reduce inter-task interference while maintaining scalability and stability through regularization constraints.
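ProP's mechanism beyond "task-specific prompts and prototypes" is not detailed in the summary. As background, a nearest-prototype classifier over class-mean embeddings is the common building block such methods start from, sketched here as an illustration only:

```python
def build_prototypes(embeddings_by_class):
    """One prototype per class: the mean of that class's embeddings."""
    return {label: [sum(dim) / len(vecs) for dim in zip(*vecs)]
            for label, vecs in embeddings_by_class.items()}

def classify(x, prototypes):
    """Assign x to the class whose prototype is nearest (squared L2)."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(x, prototypes[label]))
```

Because each class is summarized by a single mean vector, adding a new task only appends prototypes; no key-value lookup table ties tasks together.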
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers propose an Adaptive and Selective Reset (ASR) scheme to address model collapse in long-term test-time adaptation, where AI models gradually degrade and predict only a few classes. The solution dynamically determines when and where to reset models while preserving beneficial knowledge through importance-aware regularization.
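The when-and-where logic can be sketched as below: collapse is flagged when the entropy of recent predictions drops (the model predicts only a few classes), and only low-importance parameters are reset. ASR's real detection criterion and importance measure are not detailed in the summary, so both thresholds and the entropy test here are illustrative assumptions.

```python
import math
from collections import Counter

def prediction_entropy(preds, num_classes):
    """Normalized entropy of recent predicted labels, in [0, 1]."""
    counts = Counter(preds)
    total = len(preds)
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(num_classes)

def selective_reset(params, init_params, importance, preds, num_classes,
                    entropy_thresh=0.5, importance_thresh=0.1):
    """Reset only unimportant parameters, and only when collapse is detected."""
    if prediction_entropy(preds, num_classes) >= entropy_thresh:
        return params                    # healthy: keep adapting as-is
    return [init if imp < importance_thresh else p
            for p, init, imp in zip(params, init_params, importance)]
```

The selectivity is the point: important parameters survive the reset, so knowledge gained during adaptation is not wiped out wholesale.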
AI · Bullish · arXiv – CS AI · Mar 3 · 5/10 · 5
🧠 Researchers propose Streaming Continual Learning (SCL), a unified framework that combines Continual Learning and Streaming Machine Learning to enable AI systems to adapt to dynamic data streams while retaining previous knowledge. This approach aims to advance intelligent systems by bridging two previously separate research communities.
AI · Neutral · Google Research Blog · Nov 7 · 4/10 · 5
🧠 A new machine learning paradigm called Nested Learning has been introduced for continual learning applications. This represents a theoretical advancement in AI algorithms that could improve how AI systems learn and adapt over time without forgetting previous knowledge.
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10 · 6
🧠 Researchers developed a framework to address catastrophic forgetting in IoT intrusion detection systems using continual learning approaches. The study benchmarked five methods across 48 attack domains, finding that replay-based approaches performed best overall while Synaptic Intelligence achieved near-zero forgetting with high efficiency.
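Synaptic Intelligence (Zenke et al., 2017) is the one benchmarked method with a well-known closed-form recipe: accumulate a per-parameter path integral during training, convert it to an importance weight at each task boundary, and penalize drift from the previous task's weights. The sketch below follows that recipe; variable names and hyperparameter values are illustrative, not taken from the study's code.

```python
class SynapticIntelligence:
    """Per-parameter importance tracking for the SI regularizer."""
    def __init__(self, num_params, damping=0.1, strength=1.0):
        self.omega = [0.0] * num_params      # running path integral
        self.big_omega = [0.0] * num_params  # consolidated importance
        self.anchor = [0.0] * num_params     # params at last task boundary
        self.damping = damping
        self.strength = strength

    def accumulate(self, grads, deltas):
        # per-step contribution: minus gradient times parameter update
        for k, (g, d) in enumerate(zip(grads, deltas)):
            self.omega[k] += -g * d

    def consolidate(self, params):
        # at a task boundary, turn the path integral into importance
        for k in range(len(params)):
            total_change = params[k] - self.anchor[k]
            self.big_omega[k] += self.omega[k] / (total_change ** 2 + self.damping)
            self.omega[k] = 0.0
        self.anchor = list(params)

    def penalty(self, params):
        # quadratic cost for moving important parameters off their anchors
        return self.strength * sum(
            w * (p - a) ** 2
            for w, p, a in zip(self.big_omega, params, self.anchor))
```

The "near-zero forgetting with high efficiency" result is plausible from the structure: the importance bookkeeping is O(parameters) per step, with no replay memory to store or sample.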
AI · Neutral · arXiv – CS AI · Mar 2 · 4/10 · 6
🧠 Researchers propose SegReg, a latent-space regularization framework for medical image segmentation that improves model generalization and continual learning capabilities. The method operates on U-Net feature maps and demonstrates consistent improvements across prostate, cardiac, and hippocampus segmentation tasks without adding extra parameters.
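A latent-space penalty on feature maps can be sketched as below. SegReg's actual regularizer is not specified in the summary; a plain L2 penalty on bottleneck activations is shown purely as an illustration, and, consistent with the claim above, it adds a loss term but no parameters to the model.

```python
def latent_l2_penalty(feature_map):
    """Mean squared activation over a 2D feature map (illustrative stand-in
    for a penalty on U-Net bottleneck features)."""
    flat = [v for row in feature_map for v in row]
    return sum(v * v for v in flat) / len(flat)

def total_loss(seg_loss, feature_map, lam=0.01):
    """Segmentation loss plus a weighted latent-space penalty."""
    return seg_loss + lam * latent_l2_penalty(feature_map)
```

Regularizing the latent space rather than the weights is what lets the same term serve both goals in the summary: it constrains the features each new dataset produces, which helps generalization and limits drift across sequential tasks.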