#model-interpretability · 2 articles
arXiv – CS AI · 4h ago

Hierarchical Concept-based Interpretable Models

Researchers introduce Hierarchical Concept Embedding Models (HiCEMs), a new approach to make deep neural networks more interpretable by modeling relationships between concepts in hierarchical structures. The method includes Concept Splitting to automatically discover fine-grained sub-concepts without additional annotations, reducing the burden of manual labeling while improving model accuracy and interpretability.
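To make the idea concrete, here is a minimal concept-bottleneck sketch (an illustrative assumption, not the paper's actual HiCEM architecture): inputs map to named concept activations, and the label head reads only the concept layer, so every prediction can be explained in terms of concepts. A hierarchical variant would further split each concept into finer sub-concepts.

```python
import numpy as np

# Toy concept-bottleneck model (illustrative only; weights are random, and the
# real HiCEM learns these jointly with Concept Splitting).
rng = np.random.default_rng(0)
W_concepts = rng.normal(size=(4, 3))    # 4 input features -> 3 concept logits
w_label = np.array([1.0, -1.0, 0.5])    # label head reads ONLY the concept layer

def predict(x):
    # Sigmoid concept activations in [0, 1], one per human-interpretable concept.
    concepts = 1 / (1 + np.exp(-x @ W_concepts))
    # The label score depends solely on concepts, which is what makes the
    # bottleneck interpretable.
    label_score = concepts @ w_label
    return concepts, label_score
```

Inspecting the returned `concepts` vector shows which concepts drove a given label score.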

arXiv – CS AI · 4h ago

Joint Distribution-Informed Shapley Values for Sparse Counterfactual Explanations

Researchers introduce COLA, a framework that refines counterfactual explanations in AI models by using optimal transport theory and Shapley values to achieve the same prediction changes with 26-45% fewer feature modifications. The method works across different datasets and models to create more actionable and clearer AI explanations.
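The sparsity goal can be illustrated with a simple sketch (a hypothetical greedy search, not the paper's COLA algorithm, which uses optimal transport and Shapley values): change as few features as possible until a toy linear model's prediction flips.

```python
import numpy as np

# Toy linear classifier (weights are illustrative assumptions).
w = np.array([2.0, -1.0, 0.5, 0.0])
b = -0.5

def predict(x):
    return int(x @ w + b > 0)

def sparse_counterfactual(x, step=1.0, max_edits=10):
    """Greedily nudge the single most influential feature until the
    prediction flips, keeping the number of modified features small."""
    x = x.astype(float).copy()
    target = 1 - predict(x)
    edits = 0
    while predict(x) != target and edits < max_edits:
        i = int(np.argmax(np.abs(w)))                       # most influential feature
        direction = np.sign(w[i]) if target == 1 else -np.sign(w[i])
        x[i] += direction * step                            # one small, actionable edit
        edits += 1
    return x, edits
```

COLA's contribution is choosing *which* features to edit in a distribution-aware way, which is how it reaches the same prediction change with far fewer modifications than naive searches like this one.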
