Directional Neural Collapse Explains Few-Shot Transfer in Self-Supervised Learning
🤖AI Summary
Researchers propose directional CDNV (class-distance normalized variance measured along decision axes) as the key geometric quantity explaining why self-supervised learning (SSL) representations transfer well with few labels. The study shows that small variability along class-separating directions enables strong few-shot transfer and low interference across multiple downstream tasks.
Key Takeaways
- Directional CDNV is identified as the core factor behind successful few-shot transfer in self-supervised learning.
- Researchers prove sharp generalization bounds for downstream classification with directional CDNV as the leading term.
- Small directional CDNV forces decision axes to be nearly orthogonal, enabling one representation to support many tasks.
- Empirical results show directional CDNV collapses during pretraining even when classical CDNV remains large.
- The findings provide theoretical understanding of why frozen SSL representations work well across semantic tasks.
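To make the distinction between the two quantities concrete, here is a minimal sketch of classical CDNV (class-distance normalized variance, in the style of prior neural-collapse work) versus a directional variant that measures variance only along the class-mean-difference axis. The function names and the exact normalization are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def cdnv(X1, X2):
    # Classical CDNV: total within-class variance divided by
    # twice the squared distance between class means.
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    var1 = ((X1 - mu1) ** 2).sum(axis=1).mean()
    var2 = ((X2 - mu2) ** 2).sum(axis=1).mean()
    return (var1 + var2) / (2 * np.linalg.norm(mu1 - mu2) ** 2)

def directional_cdnv(X1, X2):
    # Directional variant (illustrative): project each deviation onto the
    # unit decision axis u = (mu1 - mu2)/||mu1 - mu2|| before measuring
    # variance, so only variability along the class-separating direction counts.
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    d = mu1 - mu2
    u = d / np.linalg.norm(d)
    var1 = (((X1 - mu1) @ u) ** 2).mean()
    var2 = (((X2 - mu2) @ u) ** 2).mean()
    return (var1 + var2) / (2 * np.linalg.norm(d) ** 2)
```

On synthetic data with large spread orthogonal to the decision axis but tiny spread along it, classical CDNV stays large while the directional quantity is near zero, mirroring the takeaway that directional CDNV can collapse even when classical CDNV does not.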
#self-supervised-learning #neural-collapse #few-shot-learning #machine-learning #transfer-learning #representation-learning #geometric-analysis #multitask-learning
Read Original → via arXiv – CS AI