y0news

#continual-learning News & Analysis

41 articles tagged with #continual-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

FreeGNN: Continual Source-Free Graph Neural Network Adaptation for Renewable Energy Forecasting

Researchers developed FreeGNN, a continual source-free graph neural network framework for renewable energy forecasting that adapts to new sites without requiring source data or target labels. The system uses a teacher-student strategy with memory replay and achieved strong performance across three real-world datasets: GEFCom2012, Solar PV, and Wind SCADA.
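The summary doesn't include the paper's code; the general teacher-student-with-replay pattern it describes can be sketched as below. All names and the reservoir-sampling buffer policy are illustrative assumptions, not FreeGNN's actual design.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past samples, refreshed by reservoir sampling."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Reservoir sampling keeps each seen sample with equal probability.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        return self.rng.sample(self.data, min(k, len(self.data)))

def ema_update(teacher, student, decay=0.99):
    """Teacher weights slowly track the student via exponential moving average."""
    return {k: decay * teacher[k] + (1 - decay) * student[k] for k in teacher}

buffer = ReplayBuffer(capacity=4)
for t in range(100):        # stream of incoming target-site samples
    buffer.add(t)

teacher = {"w": 0.0}
student = {"w": 1.0}
teacher = ema_update(teacher, student, decay=0.9)
```

In this pattern the student adapts to the new site, the slow-moving teacher provides pseudo-labels, and the replay buffer mixes in older samples to limit forgetting.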

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

A Practical Guide to Streaming Continual Learning

Researchers propose Streaming Continual Learning (SCL) as a unified paradigm that combines Continual Learning and Streaming Machine Learning approaches. SCL aims to enable AI systems to both rapidly adapt to new information and retain previously learned knowledge, addressing limitations of existing methods that excel at only one aspect.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Modular Memory is the Key to Continual Learning Agents

Researchers propose combining In-Weight Learning (IWL) and In-Context Learning (ICL) through modular memory architectures to solve continual learning challenges in AI. The framework aims to enable AI agents to continuously adapt and accumulate knowledge without catastrophic forgetting, addressing key limitations of current foundation models.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Fly-CL: A Fly-Inspired Framework for Enhancing Efficient Decorrelation and Reduced Training Time in Pre-trained Model-based Continual Representation Learning

Researchers introduce Fly-CL, a bio-inspired framework for continual representation learning that significantly reduces training time while maintaining performance comparable to state-of-the-art methods. The approach, inspired by fly olfactory circuits, addresses multicollinearity issues in pre-trained models and enables more efficient similarity matching for real-time applications.
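Fly olfactory circuits are commonly modeled as a sparse random expansion followed by winner-take-all sparsification, which decorrelates representations cheaply. A minimal sketch of that generic mechanism follows; it is an illustration of the biological analogy, not Fly-CL's implementation, and all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def fly_hash(x, proj, k):
    """Expand x with a sparse random projection, then keep only the top-k
    responses (winner-take-all) as a sparse binary code."""
    y = proj @ x
    code = np.zeros_like(y)
    code[np.argsort(y)[-k:]] = 1.0
    return code

d, m, k = 16, 128, 8                              # input dim, expanded dim, active units
proj = (rng.random((m, d)) < 0.1).astype(float)   # sparse binary projection

x = rng.standard_normal(d)
h = fly_hash(x, proj, k)
```

The resulting sparse codes make similarity matching a cheap sparse dot product, which is the efficiency property the summary highlights.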

AI · Neutral · arXiv – CS AI · Mar 3 · 5/10

FIRE: Frobenius-Isometry Reinitialization for Balancing the Stability-Plasticity Tradeoff

Researchers propose FIRE, a new reinitialization method for deep neural networks that balances stability and plasticity when learning from nonstationary data. The method constrains reinitialized weights to act as Frobenius isometries, preserving prior knowledge while restoring the capacity to adapt to new tasks, and shows superior performance across visual learning, language modeling, and reinforcement learning domains.
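The summary does not spell out how the isometry is obtained. One standard way to produce an isometric (norm-preserving) weight matrix is orthogonal initialization via a QR decomposition; the sketch below shows that generic construction as an assumption, not FIRE's actual procedure.

```python
import numpy as np

def isometric_reinit(shape, rng):
    """Reinitialize a weight matrix as a (semi-)orthogonal map, so that
    ||W x|| == ||x|| whenever rows >= cols."""
    rows, cols = shape
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)
    # Sign correction makes the draw uniform over orthogonal matrices.
    q = q * np.sign(np.diag(r))
    return q if rows >= cols else q.T

rng = np.random.default_rng(0)
w = isometric_reinit((8, 4), rng)
```

Because an isometric layer neither shrinks nor amplifies activations, reinitializing with it restores gradient flow (plasticity) without the scale disruption a naive random reset would introduce.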

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10

Enhancing Continual Learning for Software Vulnerability Prediction: Addressing Catastrophic Forgetting via Hybrid-Confidence-Aware Selective Replay for Temporal LLM Fine-Tuning

Researchers developed Hybrid Confidence-Aware Selective Replay (Hybrid-CASR), a continual learning method that improves AI-based software vulnerability detection by addressing catastrophic forgetting in temporal scenarios. The method achieved a 0.667 Macro-F1 score while reducing training time by 17% compared to baseline approaches on CVE data from 2018-2024.
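The paper's selection criterion isn't given in this summary. As an illustration only, confidence-aware selective replay is often approximated by retaining samples the model is uncertain about (near the decision boundary); the band thresholds and fallback policy below are assumptions.

```python
def select_replay(samples, confidences, budget, low=0.4, high=0.9):
    """Keep samples whose predicted confidence falls in an uncertainty band,
    then fill any remaining budget with the least confident leftovers."""
    banded = [s for s, c in zip(samples, confidences) if low <= c <= high]
    chosen = banded[:budget]
    if len(chosen) < budget:
        leftovers = sorted(
            (s for s, c in zip(samples, confidences) if s not in chosen),
            key=lambda s: confidences[samples.index(s)],
        )
        chosen += leftovers[: budget - len(chosen)]
    return chosen

samples = ["a", "b", "c", "d", "e"]
conf =    [0.99, 0.55, 0.10, 0.80, 0.95]
replay = select_replay(samples, conf, budget=3)
```

Replaying only informative samples keeps the rehearsal memory small, which is consistent with the reported training-time reduction.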

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10

Activation Function Design Sustains Plasticity in Continual Learning

Researchers from arXiv demonstrate that activation function design is crucial for maintaining neural network plasticity in continual learning scenarios. They introduce two new activation functions (Smooth-Leaky and Randomized Smooth-Leaky) that help prevent models from losing their ability to adapt to new tasks over time.
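The summary does not give the definitions of Smooth-Leaky or Randomized Smooth-Leaky. As a hedged illustration of the design principle only, a "smooth leaky" unit can be built by blending a linear leak with a softplus term so the gradient never vanishes; the formula below is an assumption, not the authors' activation.

```python
import math

def smooth_leaky(x, alpha=0.1):
    """Illustrative smooth leaky unit: a linear leak plus a softplus term.
    Its derivative is alpha + (1 - alpha) * sigmoid(x), which stays in
    (alpha, 1), so gradients never fully vanish on negative inputs."""
    softplus = math.log1p(math.exp(-abs(x))) + max(x, 0.0)  # numerically stable softplus
    return alpha * x + (1 - alpha) * softplus

def smooth_leaky_grad(x, alpha=0.1):
    """Closed-form derivative of smooth_leaky."""
    return alpha + (1 - alpha) / (1 + math.exp(-x))
```

The lower-bounded gradient is the property tied to plasticity: units can always recover, unlike ReLU units that die once their input goes negative.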

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10

Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective

Researchers introduce NTK-CL, a new framework for parameter-efficient fine-tuning in continual learning that uses Neural Tangent Kernel theory to address catastrophic forgetting. The approach achieves state-of-the-art performance by tripling the feature representation and applying adaptive mechanisms to maintain task-specific knowledge while learning new tasks.
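The Neural Tangent Kernel of a model is the Gram matrix of its parameter gradients, K(x, x') = ⟨∇_θ f(x), ∇_θ f(x')⟩. A finite-difference sketch of computing the empirical NTK for a toy network is shown below; the toy model and numerical gradients are illustrative assumptions, not NTK-CL's code.

```python
import numpy as np

def model(params, x):
    """Toy one-hidden-layer network with tanh activation, scalar output."""
    w1, w2 = params
    return np.tanh(w1 @ x) @ w2

def flat_grad(params, x, eps=1e-5):
    """Numerical gradient of the scalar output w.r.t. all parameters."""
    grads = []
    for p in params:
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            old = p[idx]
            p[idx] = old + eps; up = model(params, x)
            p[idx] = old - eps; down = model(params, x)
            p[idx] = old
            g[idx] = (up - down) / (2 * eps)
        grads.append(g.ravel())
    return np.concatenate(grads)

def empirical_ntk(params, xs):
    """K[i, j] = <grad f(x_i), grad f(x_j)> over all parameters."""
    gs = np.stack([flat_grad(params, x) for x in xs])
    return gs @ gs.T

rng = np.random.default_rng(0)
params = [rng.standard_normal((5, 3)), rng.standard_normal(5)]
xs = [rng.standard_normal(3) for _ in range(4)]
K = empirical_ntk(params, xs)
```

In the NTK view, forgetting shows up as interference between tasks' gradient directions, which is what a kernel-level analysis lets one reason about.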

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10

Locally Linear Continual Learning for Time Series based on VC-Theoretical Generalization Bounds

Researchers have developed SyMPLER, an explainable AI model for time series forecasting that uses dynamic piecewise-linear approximations to handle nonstationary environments. The model automatically determines when to add new local models based on prediction errors using Statistical Learning Theory, achieving comparable performance to black-box models while maintaining interpretability.
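The general pattern the summary describes (spawn a new local linear model when prediction error spikes) can be sketched as below. The error threshold and one-model-at-a-time policy are illustrative assumptions, not SyMPLER's VC-theoretic criterion.

```python
import numpy as np

class LocalLinearEnsemble:
    """Keeps a list of local linear models; spawns a new one whenever the
    current model's prediction error exceeds a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.models = []          # list of (weights, bias) pairs

    def _fit(self, X, y):
        w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        return w[:-1], w[-1]

    def predict(self, x):
        if not self.models:
            return 0.0
        w, b = self.models[-1]    # most recent local model
        return float(x @ w + b)

    def update(self, X, y):
        err = np.mean([(self.predict(x) - t) ** 2 for x, t in zip(X, y)])
        if not self.models or err > self.threshold:
            self.models.append(self._fit(X, y))
        return err

rng = np.random.default_rng(0)
ens = LocalLinearEnsemble(threshold=0.1)
X1 = rng.standard_normal((20, 2)); y1 = X1 @ np.array([1.0, -2.0]) + 0.5
ens.update(X1, y1)                 # first regime -> first local model
X2 = rng.standard_normal((20, 2)); y2 = X2 @ np.array([-3.0, 1.0]) - 1.0
ens.update(X2, y2)                 # regime shift -> error spikes, new model
```

Each local model is a plain linear map, which is where the claimed interpretability comes from: every prediction can be read off as a weighted sum of inputs.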

AI · Neutral · arXiv – CS AI · Mar 16 · 4/10

Residual SODAP: Residual Self-Organizing Domain-Adaptive Prompting with Structural Knowledge Preservation for Continual Learning

Researchers propose Residual SODAP, a new continual learning framework that addresses catastrophic forgetting in AI models when adapting to new domains without access to previous data. The method combines prompt-based adaptation with classifier knowledge preservation, achieving state-of-the-art results on three benchmarks.

AI · Neutral · arXiv – CS AI · Mar 16 · 4/10

Key-Value Pair-Free Continual Learner via Task-Specific Prompt-Prototype

Researchers propose a new continual learning approach called Prompt-Prototype (ProP) that eliminates key-value pairing dependencies in AI models. The method uses task-specific prompts and prototypes to reduce inter-task interference while maintaining scalability and stability through regularization constraints.
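The prototype side of this idea can be sketched as a nearest-class-mean classifier whose prototypes are never rewritten once a task is learned; the prompt machinery is omitted. This is a generic illustration under those assumptions, not ProP's method.

```python
import numpy as np

class PrototypeClassifier:
    """Stores one mean-feature prototype per class and classifies new
    features by nearest prototype; adding a task never rewrites old ones."""
    def __init__(self):
        self.prototypes = {}      # class label -> mean feature vector

    def add_task(self, features, labels):
        for label in set(labels):
            mask = [l == label for l in labels]
            self.prototypes[label] = features[mask].mean(axis=0)

    def predict(self, x):
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(x - self.prototypes[c]))

rng = np.random.default_rng(0)
clf = PrototypeClassifier()
# Task 1: classes 0 and 1 around distinct centers.
f1 = np.vstack([rng.normal(0, 0.1, (10, 4)), rng.normal(3, 0.1, (10, 4))])
clf.add_task(f1, [0] * 10 + [1] * 10)
# Task 2: class 2 arrives later without touching old prototypes.
clf.add_task(rng.normal(-3, 0.1, (10, 4)), [2] * 10)
```

Because later tasks only append prototypes, inter-task interference at the classifier level is structurally limited, which is the stability property the summary mentions.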

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10

When and Where to Reset Matters for Long-Term Test-Time Adaptation

Researchers propose an Adaptive and Selective Reset (ASR) scheme to address model collapse in long-term test-time adaptation, where AI models gradually degrade and predict only a few classes. The solution dynamically determines when and where to reset models while preserving beneficial knowledge through importance-aware regularization.
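A minimal sketch of the "when and where" logic: detect collapse from the entropy of recent predictions (when), and reset only low-importance parameters back to the source model (where). The entropy threshold and median-importance cutoff are illustrative assumptions, not ASR's actual rules.

```python
import math

def prediction_entropy(class_counts):
    """Shannon entropy of the model's recent class predictions; near zero
    means the model has collapsed onto a few classes."""
    total = sum(class_counts)
    probs = [c / total for c in class_counts if c > 0]
    return -sum(p * math.log(p) for p in probs)

def selective_reset(current, source, importance, collapsed):
    """If collapse is detected, snap the least-important half of the
    parameters back to their source-model values."""
    if not collapsed:
        return dict(current)
    cutoff = sorted(importance.values())[len(importance) // 2]
    return {k: source[k] if importance[k] < cutoff else current[k]
            for k in current}

counts = [97, 1, 1, 1]                      # predictions piling onto class 0
collapsed = prediction_entropy(counts) < 0.5
current = {"a": 2.0, "b": -1.0, "c": 0.3, "d": 4.0}
source  = {"a": 0.0, "b": 0.0, "c": 0.0, "d": 0.0}
imp     = {"a": 0.9, "b": 0.1, "c": 0.2, "d": 0.8}
after = selective_reset(current, source, imp, collapsed)
```

Sparing high-importance parameters is what preserves the beneficial adapted knowledge that a full reset would destroy.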

AI · Bullish · arXiv – CS AI · Mar 3 · 5/10

Streaming Continual Learning for Unified Adaptive Intelligence in Dynamic Environments

Researchers propose Streaming Continual Learning (SCL), a unified framework that combines Continual Learning and Streaming Machine Learning to enable AI systems to adapt to dynamic data streams while retaining previous knowledge. This approach aims to advance intelligent systems by bridging two previously separate research communities.

AI · Neutral · Google Research Blog · Nov 7 · 4/10

Introducing Nested Learning: A new ML paradigm for continual learning

A new machine learning paradigm called Nested Learning has been introduced for continual learning applications. This represents a theoretical advancement in AI algorithms that could improve how AI systems learn and adapt over time without forgetting previous knowledge.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10

Quantifying Catastrophic Forgetting in IoT Intrusion Detection Systems

Researchers developed a framework to address catastrophic forgetting in IoT intrusion detection systems using continual learning approaches. The study benchmarked five methods across 48 attack domains, finding that replay-based approaches performed best overall while Synaptic Intelligence achieved near-zero forgetting with high efficiency.
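Synaptic Intelligence, the best near-zero-forgetting method here, accumulates a per-parameter path importance ω += −g·Δθ during training and later penalizes drift in important parameters. A toy numpy sketch of that accumulation follows; the loss and hyperparameters are illustrative, not the benchmark's code.

```python
import numpy as np

def si_train(theta, grad_fn, steps, lr=0.1, xi=1e-3):
    """One task of SGD while accumulating Synaptic Intelligence's
    per-parameter path importance: omega += -grad * delta_theta."""
    omega = np.zeros_like(theta)
    start = theta.copy()
    for _ in range(steps):
        g = grad_fn(theta)
        delta = -lr * g
        omega += -g * delta           # importance grows where loss dropped
        theta = theta + delta
    # Normalize by squared total movement, as in the SI regularizer.
    importance = omega / ((theta - start) ** 2 + xi)
    return theta, importance

# Toy loss: only the first coordinate matters -> it should be "important".
grad_fn = lambda th: np.array([2 * (th[0] - 3.0), 0.0])
theta0 = np.array([0.0, 5.0])
theta, importance = si_train(theta0, grad_fn, steps=100)
```

Parameters that did real work on old attack classes end up with high importance and are anchored when a new attack domain is learned, which is why SI forgets so little at low cost.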

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

SegReg: Latent Space Regularization for Improved Medical Image Segmentation

Researchers propose SegReg, a latent-space regularization framework for medical image segmentation that improves model generalization and continual learning capabilities. The method operates on U-Net feature maps and demonstrates consistent improvements across prostate, cardiac, and hippocampus segmentation tasks without adding extra parameters.
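The core idea, a penalty on intermediate feature maps added to the task loss, can be sketched as below. The squared-error task loss and plain L2 latent penalty are illustrative assumptions, not SegReg's actual objective.

```python
import numpy as np

def segmentation_loss(logits, targets):
    """Toy pixel-wise squared-error stand-in for a segmentation loss."""
    return float(np.mean((logits - targets) ** 2))

def latent_regularizer(features, weight=0.01):
    """Penalize the L2 norm of intermediate feature maps, nudging the
    encoder toward a compact, better-generalizing latent space."""
    return weight * float(np.mean(features ** 2))

def total_loss(logits, targets, features, weight=0.01):
    return segmentation_loss(logits, targets) + latent_regularizer(features, weight)

rng = np.random.default_rng(0)
logits = rng.random((4, 4)); targets = rng.random((4, 4))
features = rng.standard_normal((8, 4, 4))   # e.g. a U-Net bottleneck map

base = segmentation_loss(logits, targets)
reg = total_loss(logits, targets, features)
```

Since the penalty is just an extra loss term on existing activations, it adds no parameters, matching the summary's claim.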

Page 2 of 2