
Hierarchical Concept-based Interpretable Models

arXiv – CS AI | Oscar Hill, Mateo Espinosa Zarlenga, Mateja Jamnik
🤖 AI Summary

Researchers introduce Hierarchical Concept Embedding Models (HiCEMs), a new approach to make deep neural networks more interpretable by modeling relationships between concepts in hierarchical structures. The method includes Concept Splitting to automatically discover fine-grained sub-concepts without additional annotations, reducing the burden of manual labeling while improving model accuracy and interpretability.

Key Takeaways
  • HiCEMs address limitations of existing Concept Embedding Models by explicitly modeling inter-concept relationships through hierarchical structures.
  • Concept Splitting method automatically discovers interpretable sub-concepts from pretrained models without requiring additional manual annotations.
  • The approach enables fine-grained explanations from limited concept labels, reducing annotation costs for training interpretable models.
  • Testing on multiple datasets, including a new PseudoKitchens dataset, shows HiCEMs can discover human-interpretable concepts that were absent from the training annotations.
  • HiCEMs support test-time concept interventions at different granularities, leading to improved task accuracy.
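The last takeaway, test-time concept interventions at different granularities, can be illustrated with a toy sketch. The class below is purely illustrative, not the paper's actual HiCEM architecture: concept names, the random linear probes, and the two-level hierarchy are all assumptions made for the example. The key idea it shows is that intervening on a parent concept propagates the fixed value to all of its sub-concepts, while intervening on a single sub-concept is more fine-grained.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyHierarchicalConceptModel:
    """Toy concept-bottleneck-style model with a two-level concept
    hierarchy. Weights are random placeholders; this only sketches the
    intervention mechanics, not the HiCEM training procedure."""

    def __init__(self, n_features, hierarchy):
        # hierarchy: {parent_concept: [sub_concept, ...]}
        self.hierarchy = hierarchy
        self.sub_concepts = [s for subs in hierarchy.values() for s in subs]
        # one linear probe per fine-grained sub-concept
        self.probes = {s: rng.normal(size=n_features) for s in self.sub_concepts}
        # task head operating on the sub-concept activations
        self.head = rng.normal(size=len(self.sub_concepts))

    def predict_concepts(self, x):
        # sigmoid probability for each fine-grained sub-concept
        return {s: 1.0 / (1.0 + np.exp(-self.probes[s] @ x))
                for s in self.sub_concepts}

    def predict(self, x, interventions=None):
        """Task score; `interventions` maps a concept name (parent or
        sub-concept) to a fixed ground-truth value in [0, 1]."""
        probs = self.predict_concepts(x)
        for name, value in (interventions or {}).items():
            # a parent name expands to all its children; a sub-concept
            # name maps to itself
            for s in self.hierarchy.get(name, [name]):
                probs[s] = value
        acts = np.array([probs[s] for s in self.sub_concepts])
        return float(self.head @ acts)

model = TinyHierarchicalConceptModel(
    n_features=4,
    hierarchy={"wing": ["wing_color", "wing_shape"],
               "beak": ["beak_length"]},
)
x = rng.normal(size=4)
baseline = model.predict(x)
# coarse-grained intervention: fixing the parent "wing" sets both
# of its sub-concepts at once
intervened = model.predict(x, interventions={"wing": 1.0})
```

A coarse intervention on a parent is equivalent to intervening on each of its sub-concepts individually, which is what makes mixed-granularity corrections cheap when only high-level labels are available.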