UFO: A Unified Flow-Oriented Framework for Robust Continual Graph Learning
Researchers introduce UFO, a framework addressing robust continual graph learning by simultaneously tackling catastrophic forgetting and noisy data supervision in evolving graphs. The method uses flow-based generative modeling to mitigate forgetting and instance-level reliability scoring to handle corrupted labels, demonstrating superior performance across benchmark datasets.
The paper addresses a critical gap in graph learning research by tackling the intersection of two previously siloed problems: catastrophic forgetting in continual learning and label noise robustness. While continual graph learning methods have matured to handle task sequences, they assume clean annotations—an unrealistic premise in real-world graph data where new nodes and edges often arrive with annotation errors or adversarial corruption. The authors identify catastrophic remembering as a novel failure mode where models reinforce corrupted knowledge across sequential tasks, compounding the damage of noisy supervision.
Graph learning has become increasingly important as applications like social networks, recommendation systems, and knowledge graphs operate in dynamic environments. The shift toward continual learning reflects real deployment scenarios where data continuously evolves rather than remaining static. However, practical implementations reveal that newly arriving graph portions frequently contain labeling errors due to crowdsourcing limitations, automated annotation failures, or adversarial attacks—problems largely ignored by the existing continual graph learning (CGL) literature.
UFO's technical approach leverages flow-based generative models to learn and replay conditional feature distributions without storing historical data, directly addressing the memory constraints of continual learning. Simultaneously, the framework estimates per-node reliability scores to downweight corrupted examples during training. Together, these two mechanisms let the model rehearse past tasks while filtering unreliable supervision. Experimental validation across four benchmark datasets with varying noise ratios demonstrates consistent improvements in both accuracy and forgetting metrics, suggesting practical applicability.
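The two mechanisms can be illustrated with a minimal sketch. This is not the paper's implementation: the per-class Gaussian below is a deliberately simple stand-in for the flow model (it plays the same role of sampling old-task features without storing them), and the small-loss sigmoid weighting is one common way to realize instance-level reliability scoring; all function names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_class_generators(features, labels):
    """Fit a mean/std per class (a stand-in for training a normalizing
    flow on each class's feature distribution)."""
    params = {}
    for c in np.unique(labels):
        x = features[labels == c]
        params[c] = (x.mean(axis=0), x.std(axis=0) + 1e-6)
    return params

def sample_replay(params, n_per_class):
    """Sample synthetic old-task features from the fitted generators,
    so no historical graph data needs to be kept in memory."""
    xs, ys = [], []
    for c, (mu, sigma) in params.items():
        xs.append(rng.normal(mu, sigma, size=(n_per_class, mu.shape[0])))
        ys.append(np.full(n_per_class, c))
    return np.vstack(xs), np.concatenate(ys)

def reliability_weights(per_node_loss, temperature=1.0):
    """Small-loss heuristic: nodes whose loss is low relative to peers are
    more likely to carry clean labels, so they get weights near 1."""
    z = (per_node_loss - per_node_loss.mean()) / (per_node_loss.std() + 1e-12)
    return 1.0 / (1.0 + np.exp(z / temperature))

def weighted_loss(per_node_loss, weights):
    """Reliability-weighted training objective: corrupted (high-loss)
    nodes contribute less to the update."""
    return float((weights * per_node_loss).sum() / weights.sum())
```

In a training loop, replayed samples from `sample_replay` would be mixed into each new task's batch to counter forgetting, while `reliability_weights` downweights suspect annotations in the current task.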
The work carries implications for industries deploying graph neural networks in semi-supervised settings where annotation quality varies. Future research should explore integration with active learning strategies to further reduce reliance on noisy labels and examine scalability to trillion-scale graphs.
- The UFO framework simultaneously addresses catastrophic forgetting and label noise in continual graph learning through flow-based modeling and reliability scoring.
- The paper identifies catastrophic remembering as a failure mode in which corrupted knowledge is reinforced across sequential graph learning tasks.
- Flow-based generative modeling enables replay of conditional feature distributions without storing historical graph data, reducing memory overhead.
- Instance-level reliability scoring distinguishes clean from noisy nodes, mitigating the impact of annotation errors and adversarial corruption.
- Experimental results on four benchmarks show consistent improvements in accuracy and forgetting metrics across varying noise ratios.