y0news
#explainable-ai · 5 articles
AIBullish · arXiv – CS AI · 4h ago · 3
🧠

An Efficient Unsupervised Federated Learning Approach for Anomaly Detection in Heterogeneous IoT Networks

Researchers propose an efficient unsupervised federated learning framework for anomaly detection in heterogeneous IoT networks that preserves privacy while leveraging shared features from multiple datasets. The approach uses explainable AI techniques like SHAP for transparency and demonstrates superior performance compared to conventional federated learning methods on real-world IoT datasets.
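The privacy-preserving ingredient here is standard federated aggregation: clients share model weights rather than raw IoT data. A minimal pure-Python sketch of that idea (size-weighted FedAvg-style averaging; the function name and flat weight vectors are illustrative, not the paper's API):

```python
# Sketch of federated averaging: each IoT client trains locally and
# only its weight vector is shared; raw sensor data never leaves the
# device. Weights are averaged proportionally to local dataset size.

def federated_average(client_weights, client_sizes):
    """Size-weighted average of per-client weight vectors (flat lists)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients holding 100 and 300 local samples:
global_w = federated_average([[1.0, 2.0], [3.0, 6.0]], [100, 300])
print(global_w)  # -> [2.5, 5.0]
```

The larger client dominates the average, which is the usual FedAvg behavior; the paper's unsupervised, heterogeneity-aware scheme builds on top of this basic step.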

AINeutral · arXiv – CS AI · 4h ago · 0
🧠

Hierarchical Concept-based Interpretable Models

Researchers introduce Hierarchical Concept Embedding Models (HiCEMs), a new approach to make deep neural networks more interpretable by modeling relationships between concepts in hierarchical structures. The method includes Concept Splitting to automatically discover fine-grained sub-concepts without additional annotations, reducing the burden of manual labeling while improving model accuracy and interpretability.
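The core intuition of Concept Splitting, discovering sub-concepts from a concept's own activation pattern without new labels, can be sketched with a toy 1-D 2-means partition (the paper's actual procedure operates on learned concept embeddings and is more involved; everything below is an illustrative assumption):

```python
# Toy sketch of "concept splitting": partition the activations a single
# concept produces into two candidate sub-concepts by clustering them.
# A simple 1-D 2-means stands in for the real embedding-space method.

def split_concept(activations, iters=10):
    """Split 1-D concept activations into two sub-concept groups."""
    c0, c1 = min(activations), max(activations)  # initial centroids
    for _ in range(iters):
        g0 = [a for a in activations if abs(a - c0) <= abs(a - c1)]
        g1 = [a for a in activations if abs(a - c0) > abs(a - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return g0, g1

# A "striped" concept whose activations fall into two clear modes:
low, high = split_concept([0.1, 0.2, 0.15, 0.9, 0.95, 0.85])
```

Here the single concept cleanly separates into a low-activation and a high-activation sub-concept, which is the kind of fine-grained structure the method surfaces without extra annotation.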

AIBullish · arXiv – CS AI · 4h ago · 0
🧠

Joint Distribution-Informed Shapley Values for Sparse Counterfactual Explanations

Researchers introduce COLA, a framework that refines counterfactual explanations for AI models by combining optimal transport theory with Shapley values, achieving the same prediction changes with 26–45% fewer feature modifications. The method generalizes across datasets and models, yielding sparser, more actionable explanations.
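The Shapley-value machinery the paper builds on can be shown exactly on a toy model (COLA's optimal-transport refinement is not reproduced here; the helper below is an illustrative exact computation, with absent features replaced by a baseline value):

```python
# Exact Shapley values for a small model: each feature's attribution is
# its weighted marginal contribution averaged over all coalitions of the
# other features. Features outside a coalition are set to the baseline.

from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    n = len(x)

    def eval_coalition(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (eval_coalition(set(S) | {i}) - eval_coalition(set(S)))
        phi.append(total)
    return phi

# For a linear model, the Shapley value of feature i is w_i * (x_i - baseline_i):
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))  # -> [2.0, 3.0]
```

This exponential-time exact form is only practical for a handful of features; sparse counterfactual methods like COLA use such attributions to rank which few features are worth changing.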

AINeutral · arXiv – CS AI · 4h ago · 0
🧠

Rough Sets for Explainability of Spectral Graph Clustering

Researchers propose an enhanced methodology that uses rough set theory to improve the explainability of Graph Spectral Clustering (GSC) algorithms. The approach addresses the difficulty of explaining clustering results, particularly for text documents, where spectral-space embeddings lack a clear relation to document content.
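The rough-set vocabulary used for this kind of explanation is compact: a cluster X is bracketed between a lower approximation (granules certainly inside X) and an upper approximation (granules possibly in X). A minimal sketch, assuming granules are given as indiscernibility classes (the function name is illustrative, not the paper's code):

```python
# Rough-set approximation of a cluster X by indiscernibility classes:
# classes fully contained in X form the lower approximation (certain
# members); classes that merely intersect X form the upper approximation
# (possible members). Their difference is the boundary region, which is
# where the clustering result needs explanation.

def rough_approximations(classes, X):
    """Return (lower, upper) approximations of set X over a partition."""
    X = set(X)
    lower, upper = set(), set()
    for c in classes:
        c = set(c)
        if c <= X:
            lower |= c
        if c & X:
            upper |= c
    return lower, upper

classes = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = rough_approximations(classes, {1, 2, 3})
boundary = upper - lower  # elements whose cluster membership is uncertain
```

Here documents 1 and 2 certainly belong to the cluster, while 3 and 4 sit in the boundary region, exactly the cases where a spectral assignment calls for an attribute-level explanation.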