75 articles tagged with #explainable-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI Bearish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠 Researchers have identified critical failures in Self-explainable Graph Neural Networks (SE-GNNs) where explanations can be completely unrelated to how the models actually make predictions. The study reveals that these degenerate explanations can hide the use of sensitive attributes and can emerge both maliciously and naturally, while existing faithfulness metrics fail to detect them.
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 14
🧠 Researchers propose an efficient unsupervised federated learning framework for anomaly detection in heterogeneous IoT networks that preserves privacy while leveraging shared features from multiple datasets. The approach uses explainable AI techniques like SHAP for transparency and demonstrates superior performance compared to conventional federated learning methods on real-world IoT datasets.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠 Researchers developed a lightweight intrusion detection system using XGBoost and explainable AI to detect Advanced Persistent Threats (APTs) at early stages. The system reduced required features from 77 to just 4 while maintaining 97% precision and 100% recall performance.
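The pruning recipe this summary describes (rank features, keep a handful, retrain a lighter model) can be sketched as follows. This is a hedged illustration on synthetic data: it uses scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost and plain impurity importances rather than the paper's SHAP-based analysis, and the dataset and scores are synthetic, not the paper's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# Synthetic stand-in for a 77-feature network-flow dataset.
X, y = make_classification(n_samples=600, n_features=77, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train on all features, then rank them by impurity importance.
full = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
top4 = np.argsort(full.feature_importances_)[-4:]

# Retrain a lighter detector on only the 4 highest-ranked features.
lite = GradientBoostingClassifier(random_state=0).fit(X_tr[:, top4], y_tr)
pred = lite.predict(X_te[:, top4])
precision = precision_score(y_te, pred)
recall = recall_score(y_te, pred)
```

On real traffic data the ranking step would use SHAP values per the paper; the retrain-on-top-k structure is the same either way.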
AI Neutral · arXiv – CS AI · Mar 26 · 4/10
🧠 Researchers propose a new framework for evaluating uncertainty attribution methods in explainable AI, addressing inconsistent evaluation practices in the field. The study introduces five key properties including a new 'conveyance' metric and demonstrates that gradient-based methods outperform perturbation-based approaches across multiple evaluation criteria.
AI Neutral · arXiv – CS AI · Mar 17 · 5/10
🧠 Researchers propose a formal abductive explanation framework to analyze AI predictions of mental health help-seeking in tech workplaces. The framework aims to provide rigorous justifications for model outputs while examining the influence of sensitive attributes like gender to ensure fairness in AI-driven mental health interventions.
AI Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers have developed SyMPLER, an explainable AI model for time series forecasting that uses dynamic piecewise-linear approximations to handle nonstationary environments. The model automatically determines when to add new local models based on prediction errors using Statistical Learning Theory, achieving comparable performance to black-box models while maintaining interpretability.
AI Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers introduce EAGLE, a new framework for explaining black-box machine learning models using information-theoretic active learning to select optimal data perturbations. The method produces feature importance scores with uncertainty estimates and demonstrates improved explanation reproducibility and stability compared to existing approaches like LIME.
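For context on what "perturbation-based" means here, a minimal sketch of the LIME-style baseline this summary contrasts against (EAGLE's information-theoretic perturbation selection is not reproduced): sample points around an instance, query the black box, and fit a proximity-weighted linear surrogate. The `black_box` function and all numbers below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black box: its output depends mostly on feature 0.
def black_box(X):
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

x0 = np.array([1.0, 1.0, 1.0])  # instance to explain

# Sample random perturbations around x0 and query the black box.
Z = x0 + rng.normal(scale=0.5, size=(500, 3))
y = black_box(Z)

# Weight samples by proximity to x0, then fit a local linear surrogate.
w = np.exp(-np.sum((Z - x0) ** 2, axis=1))
surrogate = Ridge(alpha=1e-3).fit(Z, y, sample_weight=w)
importance = np.abs(surrogate.coef_)  # local feature-importance scores
```

The instability LIME is criticized for comes from the random perturbation draw; choosing perturbations actively, as the paper proposes, targets exactly that step.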
AI Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers developed a new method for converting random forest classifiers into circuit representations that enables more efficient computation of decision explanations. The approach provides tools for computing robustness metrics and identifying ways to alter classifier decisions, with applications in explainable AI (XAI).
AI Neutral · arXiv – CS AI · Mar 17 · 5/10
🧠 Researchers developed a privacy-preserving method using SHAP entropy regularization to protect sensitive user data in explainable AI systems for smart home IoT applications. The approach reduces privacy leakage while maintaining model accuracy and explanation quality.
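The paper's exact regularizer is not reproduced here, but the intuition behind penalizing SHAP entropy can be shown with a toy `attribution_entropy` helper (a hypothetical name): highly concentrated attributions reveal exactly which input drove a decision, while diffuse attributions leak less, so a regularizer can push explanation entropy upward.

```python
import numpy as np

def attribution_entropy(attributions, eps=1e-12):
    """Shannon entropy of the normalized absolute attribution mass."""
    p = np.abs(attributions) / (np.abs(attributions).sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

# A concentrated explanation pinpoints one sensor; a diffuse one does not.
concentrated = np.array([0.9, 0.05, 0.03, 0.02])
diffuse = np.array([0.25, 0.25, 0.25, 0.25])

h_conc = attribution_entropy(concentrated)  # low entropy, more revealing
h_diff = attribution_entropy(diffuse)       # maximal entropy, less revealing
```

In training, a term like `-lambda * attribution_entropy(shap_values)` added to the loss would trade a little attribution sharpness for reduced leakage; the weighting and SHAP computation details are the paper's, not shown here.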
AI Bullish · arXiv – CS AI · Mar 9 · 5/10
🧠 Researchers introduce CLAIRE, a deep learning framework that combines unsupervised autoencoders with supervised classification for fault detection in industrial manufacturing. The system transforms high-dimensional sensor data into compact representations and uses explainable AI techniques to identify key features contributing to fault predictions.
AI Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers introduce WeightLens and CircuitLens, two new neural network interpretability methods that go beyond traditional activation-based approaches. These tools aim to provide more systematic and scalable analysis of neural network circuits by interpreting features directly from weights and capturing feature interactions.
AI Neutral · arXiv – CS AI · Mar 4 · 4/10 · 3
🧠 This academic survey examines Neuro-Symbolic AI methods that combine neural networks with symbolic computing to enhance explainability and reasoning capabilities. The research explores how these hybrid approaches can address limitations in semantic generalizability and compete with pure connectionist systems in real-world applications.
AI Neutral · arXiv – CS AI · Mar 4 · 4/10 · 2
🧠 Researchers propose Diffusion-EXR, a new AI model that uses Denoising Diffusion Probabilistic Models (DDPM) to generate review text for explainable recommendation systems. The model corrupts review embeddings with Gaussian noise and learns to reconstruct them, achieving state-of-the-art performance on benchmark datasets for recommendation review generation.
AI Bullish · arXiv – CS AI · Mar 3 · 5/10 · 5
🧠 Researchers conducted a mixed-methods study evaluating an explainable AI system for analyzing healthcare reviews, surveying 60 participants and conducting expert interviews. The study found strong demand for AI transparency in healthcare decision-making, with 82% of respondents saying they want to understand AI classification reasoning and 84% considering explainability important for trust.
AI Neutral · arXiv – CS AI · Mar 3 · 4/10 · 4
🧠 Researchers have developed a new Explainable AI method that makes Wasserstein distances more interpretable by attributing distance calculations to specific data components like subgroups and features. The framework enables better analysis of dataset shifts and transport phenomena across diverse applications with high accuracy.
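One naive way to see the flavor of attributing a distributional distance to features (far simpler than the paper's framework, which handles full Wasserstein distances and subgroup attributions) is to compare two datasets one marginal at a time with SciPy's 1-D `wasserstein_distance`. The data below is synthetic, with a shift planted in feature 0.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Two synthetic datasets in which only feature 0 has shifted.
A = rng.normal(size=(1000, 3))
B = rng.normal(size=(1000, 3))
B[:, 0] += 2.0

# Attribute the dataset shift feature-by-feature via 1-D distances.
shift = np.array([wasserstein_distance(A[:, j], B[:, j]) for j in range(3)])
```

The per-marginal scores correctly single out feature 0 as the source of the shift; the paper's contribution is doing this kind of attribution for the joint distance rather than independent marginals.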
AI Neutral · arXiv – CS AI · Mar 2 · 5/10 · 8
🧠 Researchers introduce Hierarchical Concept Embedding Models (HiCEMs), a new approach to make deep neural networks more interpretable by modeling relationships between concepts in hierarchical structures. The method includes Concept Splitting to automatically discover fine-grained sub-concepts without additional annotations, reducing the burden of manual labeling while improving model accuracy and interpretability.
AI Neutral · arXiv – CS AI · Feb 27 · 4/10 · 8
🧠 Researchers evaluated seven pre-trained CNN architectures for IoT DDoS attack detection, finding that DenseNet and MobileNet models provide the best balance of accuracy, reliability, and interpretability under resource constraints. The study emphasizes the importance of combining performance metrics with explainability when deploying AI security models in IoT environments.
AI Neutral · arXiv – CS AI · Feb 27 · 4/10 · 6
🧠 Researchers developed MEDNA-DFM, a dual-view deep learning model that predicts DNA methylation patterns while providing biological explanations. The model achieves high accuracy across species and includes explainable AI features that reveal conserved genetic motifs and cooperative sequence-structure relationships.
AI Neutral · Lil'Log (Lilian Weng) · Aug 1 · 5/10
🧠 Machine learning models are increasingly being deployed in critical sectors including healthcare, justice systems, and financial services. This necessitates the development of model interpretability methods to understand how AI systems make decisions and ensure compliance with ethical and legal requirements.
AI Neutral · arXiv – CS AI · Mar 3 · 4/10 · 4
🧠 Researchers developed a new method for explaining satellite mission planning decisions using solver-grounded certificates that directly derive explanations from optimization models. The approach achieves perfect accuracy in explaining why scheduling requests are accepted or rejected, outperforming traditional post-hoc explanation methods that produce non-causal attributions 29% of the time.
AI Bullish · arXiv – CS AI · Mar 3 · 4/10 · 5
🧠 Researchers published an extended validation study of the Explainability Solution Space (ESS) framework, demonstrating its effectiveness across different domains including urban resource allocation systems. The study confirms ESS can systematically adapt to various governance roles and stakeholder configurations, positioning it as a generalizable tool for explainable AI strategy design.
AI Neutral · arXiv – CS AI · Mar 3 · 4/10 · 5
🧠 Researchers introduce strength change explanations for quantitative argumentation graphs to make AI inference systems more contestable and explainable. The method describes how to modify argument strengths to achieve desired outcomes and demonstrates applications through heuristic search on layered graphs.
AI Neutral · arXiv – CS AI · Mar 2 · 4/10 · 6
🧠 Researchers have developed ArgLLM-App, a web-based system that uses Large Language Models for argumentative reasoning in decision-making tasks. The system allows human users to visualize explanations and contest reasoning mistakes, making AI decisions more transparent and contestable.
AI Bullish · arXiv – CS AI · Mar 2 · 4/10 · 7
🧠 Researchers introduce COLA, a framework that refines counterfactual explanations in AI models by using optimal transport theory and Shapley values to achieve the same prediction changes with 26-45% fewer feature modifications. The method works across different datasets and models to create more actionable and clearer AI explanations.
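COLA's optimal-transport and Shapley machinery is not reproduced here; as a point of contrast, the sketch below shows the kind of naive greedy counterfactual baseline such methods improve on: nudge one feature at a time until the model's prediction flips. `greedy_counterfactual` is a hypothetical helper, and the linear model and data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           random_state=1)
clf = LogisticRegression().fit(X, y)

def greedy_counterfactual(x, model, step=0.5, max_iters=200):
    """Flip the model's prediction by repeatedly nudging the single
    feature with the largest useful coefficient."""
    x = x.copy()
    target = 1 - model.predict(x.reshape(1, -1))[0]
    changed = set()
    for _ in range(max_iters):
        if model.predict(x.reshape(1, -1))[0] == target:
            break
        # Direction that moves the decision function toward the target class.
        grad = model.coef_[0] * (1 if target == 1 else -1)
        j = int(np.argmax(np.abs(grad)))
        x[j] += step * np.sign(grad[j])
        changed.add(j)
    return x, changed

x_cf, changed = greedy_counterfactual(X[0], clf)
```

This baseline finds *a* counterfactual but gives no guarantee the edited features are the cheapest or most actionable ones; reducing that edit footprint is precisely the 26-45% improvement the summary cites.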
AI Neutral · arXiv – CS AI · Mar 2 · 4/10 · 6
🧠 Researchers propose an enhanced methodology using rough set theory to improve explainability of Graph Spectral Clustering (GSC) algorithms. The approach addresses challenges in explaining clustering results, particularly when applied to text documents where spectral space embeddings lack clear relation to content.