y0news

#explainable-ai News & Analysis

75 articles tagged with #explainable-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

75 articles
AI · Bearish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠

GNN Explanations that do not Explain and How to find Them

Researchers have identified critical failures in Self-explainable Graph Neural Networks (SE-GNNs) where explanations can be completely unrelated to how the models actually make predictions. The study reveals that these degenerate explanations can hide the use of sensitive attributes and can emerge both maliciously and naturally, while existing faithfulness metrics fail to detect them.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 14
🧠

An Efficient Unsupervised Federated Learning Approach for Anomaly Detection in Heterogeneous IoT Networks

Researchers propose an efficient unsupervised federated learning framework for anomaly detection in heterogeneous IoT networks that preserves privacy while leveraging shared features from multiple datasets. The approach uses explainable AI techniques like SHAP for transparency and demonstrates superior performance compared to conventional federated learning methods on real-world IoT datasets.
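
To make the federated part concrete, here is a minimal FedAvg-style sketch under stated assumptions: each client fits a model on its own data and only shares parameters, which the server averages weighted by client data size. The paper's unsupervised anomaly models and SHAP reporting are not reproduced here; the local least-squares model is purely illustrative.

```python
# Minimal FedAvg-style sketch: clients train locally on private data and only
# share parameters; the server aggregates them weighted by data size.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

def local_fit(n):
    """One client: fit a least-squares model on its private data only."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return n, w

clients = [local_fit(n) for n in (50, 200, 120)]       # heterogeneous data sizes
total = sum(n for n, _ in clients)
global_w = sum(n * w for n, w in clients) / total      # FedAvg: size-weighted average
print(np.round(global_w, 2))                           # close to [1, -2, 0.5]
```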

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠

A Lightweight IDS for Early APT Detection Using a Novel Feature Selection Method

Researchers developed a lightweight intrusion detection system using XGBoost and explainable AI to detect Advanced Persistent Threats (APTs) at early stages. The system reduced the required features from 77 to just 4 while maintaining 97% precision and 100% recall.
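
A minimal sketch of the general recipe, not the paper's pipeline: train XGBoost on all features, rank them (the paper uses a novel feature selection method; plain gain-based importances stand in for it here), keep only the top few, retrain, and verify that precision and recall survive the reduction. The data below is synthetic.

```python
# Sketch: XGBoost with importance-based feature reduction on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=77, n_informative=4,
                           random_state=0)            # 77 features, few truly informative
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

full = XGBClassifier(n_estimators=200).fit(X_tr, y_tr)

# Keep only the top 4 features by importance, then retrain a compact model.
top4 = np.argsort(full.feature_importances_)[::-1][:4]
small = XGBClassifier(n_estimators=200).fit(X_tr[:, top4], y_tr)

pred = small.predict(X_te[:, top4])
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```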

AI · Neutral · arXiv – CS AI · Mar 26 · 4/10
🧠

No Single Metric Tells the Whole Story: A Multi-Dimensional Evaluation Framework for Uncertainty Attributions

Researchers propose a new framework for evaluating uncertainty attribution methods in explainable AI, addressing inconsistent evaluation practices in the field. The study introduces five key properties including a new 'conveyance' metric and demonstrates that gradient-based methods outperform perturbation-based approaches across multiple evaluation criteria.

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠

Locally Linear Continual Learning for Time Series based on VC-Theoretical Generalization Bounds

Researchers have developed SyMPLER, an explainable AI model for time series forecasting that uses dynamic piecewise-linear approximations to handle nonstationary environments. The model decides when to add new local models based on prediction errors, guided by VC-theoretical generalization bounds from statistical learning theory, and achieves performance comparable to black-box models while remaining interpretable.
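
A toy sketch of the locally linear idea only, not SyMPLER's algorithm: walk through the series in windows, fit a line per regime, and start a new local model when the current line's error on the next window degrades noticeably. The window size and threshold are hypothetical.

```python
# Toy sketch: spawn a new local linear model when the current one stops fitting.
import numpy as np

def local_linear_segments(y, window=25, tol=3.0):
    t = np.arange(len(y))
    segments, start = [], 0
    coef = np.polyfit(t[start:start + window], y[start:start + window], 1)
    pos = start + window
    while pos + window <= len(y):
        nxt = slice(pos, pos + window)
        err = np.mean(np.abs(np.polyval(coef, t[nxt]) - y[nxt]))
        fit_err = np.mean(np.abs(np.polyval(coef, t[start:pos]) - y[start:pos]))
        if err > tol * (fit_err + 1e-8):          # regime change: add a new local model
            segments.append((start, pos, coef))
            start = pos
            coef = np.polyfit(t[start:start + window], y[start:start + window], 1)
        else:                                     # same regime: extend and refit
            coef = np.polyfit(t[start:pos + window], y[start:pos + window], 1)
        pos += window
    segments.append((start, len(y), coef))
    return segments

# Nonstationary toy series whose slope flips halfway through.
rng = np.random.default_rng(0)
y = np.concatenate([0.5 * np.arange(100), 50 - 2.0 * np.arange(100)]) + rng.normal(0, 1, 200)
for s, e, c in local_linear_segments(y):
    print(f"segment [{s:3d}, {e:3d})  slope = {c[0]:+.2f}")
```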

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠

Informative Perturbation Selection for Uncertainty-Aware Post-hoc Explanations

Researchers introduce EAGLE, a new framework for explaining black-box machine learning models using information-theoretic active learning to select optimal data perturbations. The method produces feature importance scores with uncertainty estimates and demonstrates improved explanation reproducibility and stability compared to existing approaches like LIME.
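
For context, here is a bare-bones perturbation explainer in the LIME family; EAGLE's contribution is choosing which perturbations to label, whereas this sketch just draws Gaussian samples, weights them by proximity, and reads feature importances off a weighted linear surrogate. The black-box model and data are synthetic.

```python
# Bare-bones perturbation-based local explanation (LIME-style surrogate).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=1000)
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def explain(instance, predict, n_samples=500, scale=0.5):
    """Local feature importances from a proximity-weighted linear surrogate."""
    perturbed = instance + rng.normal(scale=scale, size=(n_samples, instance.size))
    preds = predict(perturbed)
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))   # RBF proximity kernel
    surrogate = Ridge(alpha=1.0).fit(perturbed - instance, preds, sample_weight=weights)
    return surrogate.coef_                                # per-feature local importance

print(np.round(explain(X[0], black_box.predict), 2))      # largest weights on features 0 and 2
```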

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠

Circuit Representations of Random Forests with Applications to XAI

Researchers developed a new method for converting random forest classifiers into circuit representations that enables more efficient computation of decision explanations. The approach provides tools for computing robustness metrics and identifying ways to alter classifier decisions, with applications in explainable AI (XAI).

AI · Neutral · arXiv – CS AI · Mar 17 · 5/10
🧠

Privacy-Preserving Explainable AIoT Application via SHAP Entropy Regularization

Researchers developed a privacy-preserving method using SHAP entropy regularization to protect sensitive user data in explainable AI systems for smart home IoT applications. The approach reduces privacy leakage while maintaining model accuracy and explanation quality.
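
A rough sketch of what a SHAP-entropy quantity could look like; the paper's exact regularizer and how it enters training may differ. The idea shown: normalize each sample's absolute SHAP values into a distribution over features and measure its Shannon entropy, which a training loop could then regularize.

```python
# Rough sketch of a per-sample SHAP entropy term on a toy model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                       # hypothetical smart-home sensor features
y = 2 * X[:, 0] + X[:, 4] + rng.normal(scale=0.1, size=400)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)   # shape (n_samples, n_features)

def shap_entropy(sv, eps=1e-12):
    p = np.abs(sv) / (np.abs(sv).sum(axis=1, keepdims=True) + eps)
    return -(p * np.log(p + eps)).sum(axis=1)             # Shannon entropy per sample

print("mean SHAP entropy:", shap_entropy(shap_values).mean())
```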

AI · Bullish · arXiv – CS AI · Mar 9 · 5/10
🧠

CLAIRE: Compressed Latent Autoencoder for Industrial Representation and Evaluation -- A Deep Learning Framework for Smart Manufacturing

Researchers introduce CLAIRE, a deep learning framework that combines unsupervised autoencoders with supervised classification for fault detection in industrial manufacturing. The system transforms high-dimensional sensor data into compact representations and uses explainable AI techniques to identify key features contributing to fault predictions.
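
A compact PyTorch sketch of the general pattern; the dimensions, layer sizes, and loss weighting are illustrative rather than CLAIRE's published architecture. An encoder compresses the sensor vector, a decoder reconstructs it, and a small classifier predicts faults from the latent code, trained with a joint reconstruction-plus-classification loss.

```python
# Sketch: autoencoder compression with a fault classifier on the latent code.
import torch
import torch.nn as nn

class LatentFaultModel(nn.Module):
    def __init__(self, n_sensors=64, latent_dim=8, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_sensors, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_sensors))
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)                    # compact representation of the sensor vector
        return self.decoder(z), self.classifier(z)

model = LatentFaultModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 64)                       # synthetic high-dimensional sensor batch
fault = torch.randint(0, 2, (128,))            # synthetic fault labels

for _ in range(100):
    recon, logits = model(x)
    loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(logits, fault)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final joint loss:", loss.item())
```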

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠

Circuit Insights: Towards Interpretability Beyond Activations

Researchers introduce WeightLens and CircuitLens, two new methods for analyzing neural network interpretability that go beyond traditional activation-based approaches. These tools aim to provide more systematic and scalable analysis of neural network circuits by interpreting features directly from weights and capturing feature interactions.

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10 · 3
🧠

Neuro-Symbolic Artificial Intelligence: A Task-Directed Survey in the Black-Box Models Era

This academic survey examines Neuro-Symbolic AI methods that combine neural networks with symbolic computing to enhance explainability and reasoning capabilities. The research explores how these hybrid approaches can address limitations in semantic generalizability and compete with pure connectionist systems in real-world applications.

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10 · 2
🧠

Diffusion-EXR: Controllable Review Generation for Explainable Recommendation via Diffusion Models

Researchers propose Diffusion-EXR, a new AI model that uses Denoising Diffusion Probabilistic Models (DDPM) to generate review text for explainable recommendation systems. The model corrupts review embeddings with Gaussian noise and learns to reconstruct them, achieving state-of-the-art performance on benchmark datasets for recommendation review generation.
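
A small sketch of the DDPM forward (corruption) process the summary refers to, applied to a toy embedding; the learned reverse/denoising model that Diffusion-EXR actually trains is omitted, and the schedule values are the standard linear defaults rather than the paper's settings.

```python
# DDPM forward process: progressively corrupt an embedding with Gaussian noise.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # standard linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # cumulative signal retention

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I)."""
    noise = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * noise

review_embedding = torch.randn(256)                # stand-in for an encoded review
for t in (0, 250, 500, 999):
    xt = q_sample(review_embedding, t)
    cos = F.cosine_similarity(review_embedding, xt, dim=0).item()
    print(f"t={t:4d}  cosine(x0, x_t) = {cos:.3f}")   # signal fades as t grows
```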

AI · Bullish · arXiv – CS AI · Mar 3 · 5/10 · 5
🧠

Designing Explainable AI for Healthcare Reviews: Guidance on Adoption and Trust

Researchers conducted a mixed-methods study evaluating an explainable AI system for analyzing healthcare reviews, surveying 60 participants and conducting expert interviews. The study found strong demand for AI transparency in healthcare decision-making, with 82% of respondents saying they want to understand AI classification reasoning and 84% considering explainability important for trust.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10 · 4
🧠

Wasserstein Distances Made Explainable: Insights Into Dataset Shifts and Transport Phenomena

Researchers have developed a new Explainable AI method that makes Wasserstein distances more interpretable by attributing distance calculations to specific data components like subgroups and features. The framework enables better analysis of dataset shifts and transport phenomena across diverse applications with high accuracy.
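
A simplified illustration of the attribution idea, not the paper's method: decompose a dataset shift into per-feature 1-D Wasserstein distances between old and new marginals. The paper attributes the true multivariate distance via its transport plan; this marginal view is only a cheap proxy on synthetic data.

```python
# Cheap proxy: per-feature 1-D Wasserstein distances locate a dataset shift.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
reference = rng.normal(size=(2000, 4))
shifted = reference.copy()
shifted[:, 2] += 1.5                                   # inject a shift in feature 2

per_feature = [wasserstein_distance(reference[:, j], shifted[:, j])
               for j in range(reference.shape[1])]
for j, d in enumerate(per_feature):
    print(f"feature {j}: W1 = {d:.3f}")                # feature 2 dominates the shift
```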

AI · Neutral · arXiv – CS AI · Mar 2 · 5/10 · 8
🧠

Hierarchical Concept-based Interpretable Models

Researchers introduce Hierarchical Concept Embedding Models (HiCEMs), a new approach to make deep neural networks more interpretable by modeling relationships between concepts in hierarchical structures. The method includes Concept Splitting to automatically discover fine-grained sub-concepts without additional annotations, reducing the burden of manual labeling while improving model accuracy and interpretability.

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10 · 8
🧠

Explainability-Aware Evaluation of Transfer Learning Models for IoT DDoS Detection Under Resource Constraints

Researchers evaluated seven pre-trained CNN architectures for IoT DDoS attack detection, finding that DenseNet and MobileNet models provide the best balance of accuracy, reliability, and interpretability under resource constraints. The study emphasizes the importance of combining performance metrics with explainability when deploying AI security models in IoT environments.
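
A short sketch of the transfer-learning setup being evaluated; the conversion of IoT flow records into image-like tensors is assumed and not shown, and the frozen-backbone choice is illustrative. The pattern: load an ImageNet-pretrained MobileNetV2, freeze the feature extractor, and swap in a two-class head for DDoS vs. benign traffic.

```python
# Transfer learning sketch: pretrained MobileNetV2 backbone, new two-class head.
import torch
import torch.nn as nn
from torchvision import models

weights = models.MobileNet_V2_Weights.DEFAULT
net = models.mobilenet_v2(weights=weights)

for p in net.features.parameters():            # freeze the pretrained backbone
    p.requires_grad = False

net.classifier[1] = nn.Linear(net.last_channel, 2)   # new head: benign vs. DDoS

x = torch.randn(8, 3, 224, 224)                # stand-in "traffic images"
logits = net(x)
print(logits.shape)                            # torch.Size([8, 2])
```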

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10 · 6
🧠

MEDNA-DFM: A Dual-View FiLM-MoE Model for Explainable DNA Methylation Prediction

Researchers developed MEDNA-DFM, a dual-view deep learning model that predicts DNA methylation patterns while providing biological explanations. The model achieves high accuracy across species and includes explainable AI features that reveal conserved genetic motifs and cooperative sequence-structure relationships.

AI · Neutral · Lil'Log (Lilian Weng) · Aug 1 · 5/10
🧠

How to Explain the Prediction of a Machine Learning Model?

Machine learning models are increasingly being deployed in critical sectors including healthcare, justice systems, and financial services. This necessitates the development of model interpretability methods to understand how AI systems make decisions and ensure compliance with ethical and legal requirements.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10 · 4
🧠

Why Not? Solver-Grounded Certificates for Explainable Mission Planning

Researchers developed a new method for explaining satellite mission planning decisions using solver-grounded certificates that directly derive explanations from optimization models. The approach achieves perfect accuracy in explaining why scheduling requests are accepted or rejected, outperforming traditional post-hoc explanation methods that produce non-causal attributions 29% of the time.

AI · Bullish · arXiv – CS AI · Mar 3 · 4/10 · 5
🧠

Extended Empirical Validation of the Explainability Solution Space

Researchers published an extended validation study of the Explainability Solution Space (ESS) framework, demonstrating its effectiveness across different domains including urban resource allocation systems. The study confirms ESS can systematically adapt to various governance roles and stakeholder configurations, positioning it as a generalizable tool for explainable AI strategy design.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10 · 5
🧠

Strength Change Explanations in Quantitative Argumentation

Researchers introduce strength change explanations for quantitative argumentation graphs to make AI inference systems more contestable and explainable. The method describes how to modify argument strengths to achieve desired outcomes and demonstrates applications through heuristic search on layered graphs.

AI · Bullish · arXiv – CS AI · Mar 2 · 4/10 · 7
🧠

Joint Distribution-Informed Shapley Values for Sparse Counterfactual Explanations

Researchers introduce COLA, a framework that refines counterfactual explanations in AI models by using optimal transport theory and Shapley values to achieve the same prediction changes with 26-45% fewer feature modifications. The method works across different datasets and models to create more actionable and clearer AI explanations.
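
A toy sketch of the sparsification goal only: plain greedy reversal stands in for COLA's optimal-transport and Shapley-value machinery. Starting from a dense counterfactual, individual feature changes are reverted one by one as long as the model's decision stays flipped; the model and data are synthetic.

```python
# Greedy sparsification of a counterfactual: undo changes the flip doesn't need.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
target = 1 - clf.predict([x])[0]
cf = X[clf.predict(X) == target][0].copy()          # a (dense) counterfactual example

# Revert the smallest feature changes first, keeping the prediction flipped.
for j in np.argsort(np.abs(cf - x)):
    trial = cf.copy()
    trial[j] = x[j]
    if clf.predict([trial])[0] == target:
        cf = trial
print("features still changed:", int(np.sum(~np.isclose(cf, x))))
```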

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10 · 6
🧠

Rough Sets for Explainability of Spectral Graph Clustering

Researchers propose an enhanced methodology using rough set theory to improve explainability of Graph Spectral Clustering (GSC) algorithms. The approach addresses challenges in explaining clustering results, particularly when applied to text documents where spectral space embeddings lack clear relation to content.
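
A toy illustration of the setting; the paper's rough-set constructions are only gestured at here by "terms in every document of a cluster" (a lower-approximation flavour) versus "terms in some document" (an upper-approximation flavour). The sketch runs spectral clustering over TF-IDF cosine similarities and prints per-cluster term sets as a readable explanation.

```python
# Spectral clustering of toy documents with simple per-cluster term explanations.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["graph spectral clustering of documents",
        "spectral methods cluster graph data",
        "neural networks learn image features",
        "deep neural image models learn features"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs).toarray()
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(cosine_similarity(X))

terms = np.array(tfidf.get_feature_names_out())
for c in set(labels):
    present = X[labels == c] > 0
    lower = terms[present.all(axis=0)]          # terms every document in the cluster uses
    upper = terms[present.any(axis=0)]          # terms some document in the cluster uses
    print(f"cluster {c}: core terms {list(lower)}, total terms {len(upper)}")
```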

โ† PrevPage 3 of 3