y0news

#explainable-ai News & Analysis

75 articles tagged with #explainable-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

Distilling Deep Reinforcement Learning into Interpretable Fuzzy Rules: An Explainable AI Framework

Researchers developed a Hierarchical Takagi-Sugeno-Kang Fuzzy Classifier System that converts opaque deep reinforcement learning agents into human-readable IF-THEN rules, achieving 81.48% fidelity in tests. The framework addresses the critical explainability problem in AI systems used for safety-critical applications by providing interpretable rules that humans can verify and understand.
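The distillation idea can be sketched in a few lines: a readable rule base mimics a black-box policy, and fidelity is simply the agreement rate between the two. The thresholds, actions, and crisp (non-fuzzy) rules below are illustrative, not the paper's hierarchical TSK system:

```python
def black_box_policy(temp):
    """Stand-in for an opaque RL agent mapping a state to an action."""
    return "cool" if temp > 24 else ("heat" if temp < 18 else "idle")

# Distilled rule base: ordered IF-THEN rules a human can audit. The first
# rule's threshold is deliberately slightly off, so fidelity is < 100%.
RULES = [
    ("IF temp is high (> 23) THEN cool", lambda t: t > 23, "cool"),
    ("IF temp is low (< 18) THEN heat",  lambda t: t < 18, "heat"),
    ("OTHERWISE idle",                   lambda t: True,   "idle"),
]

def rule_policy(temp):
    for _text, cond, action in RULES:
        if cond(temp):
            return action

def fidelity(states):
    """Fraction of states where the rule base matches the black box."""
    return sum(black_box_policy(s) == rule_policy(s) for s in states) / len(states)

states = [t / 2 for t in range(20, 70)]   # temperatures 10.0 .. 34.5
print(f"fidelity: {fidelity(states):.2%}")  # prints fidelity: 96.00%
```

A figure like the paper's 81.48% is this same agreement rate, computed against the real agent on held-out states.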

AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠

Do Metrics for Counterfactual Explanations Align with User Perception?

A new study reveals that standard algorithmic metrics used to evaluate AI counterfactual explanations poorly correlate with human perceptions of explanation quality. The research found weak and dataset-dependent relationships between technical metrics and user judgments, highlighting fundamental limitations in current AI explainability evaluation methods.
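Two of the standard metrics in question are easy to state. Proximity measures how far the counterfactual is from the original instance, and sparsity counts how many features had to change; the study's point is that scores like these track user-perceived quality only weakly. The loan-style features below are illustrative:

```python
# Two common counterfactual-explanation metrics from the XAI literature.

def proximity(x, x_cf):
    """L1 distance between the original instance and its counterfactual."""
    return sum(abs(a - b) for a, b in zip(x, x_cf))

def sparsity(x, x_cf):
    """Number of features that had to change to flip the prediction."""
    return sum(a != b for a, b in zip(x, x_cf))

x    = [35, 42000, 2]   # e.g. age, income, open loans (hypothetical)
x_cf = [35, 55000, 1]   # counterfactual: "approved at 55k income, 1 loan"

print("proximity:", proximity(x, x_cf))   # 13001
print("sparsity:", sparsity(x, x_cf))     # 2
```

A low proximity or sparsity score looks good algorithmically, yet users may still find the suggested change implausible or unactionable, which is the mismatch the study documents.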

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Feature-level Interaction Explanations in Multimodal Transformers

Researchers introduce FL-I2MoE, a new Mixture-of-Experts layer for multimodal Transformers that explicitly identifies synergistic and redundant cross-modal feature interactions. The method provides more interpretable explanations for how different data modalities contribute to AI decision-making compared to existing approaches.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Concisely Explaining the Doubt: Minimum-Size Abductive Explanations for Linear Models with a Reject Option

Researchers developed a method to compute minimum-size abductive explanations for AI linear models with reject options, addressing a key challenge in explainable AI for critical domains. The approach uses log-linear algorithms for accepted instances and integer linear programming for rejected instances, proving more efficient than existing methods despite theoretical NP-hardness.
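For a linear model, an abductive explanation is a subset of fixed features that guarantees the prediction no matter what values the remaining features take within their bounds. The greedy sketch below illustrates that definition only; it is not the paper's exact minimum-size algorithm and it ignores the reject option:

```python
def holds(w, b, x, bounds, kept):
    """True if the positive prediction is guaranteed when features
    outside `kept` vary adversarially within their bounds."""
    score = b
    for i, wi in enumerate(w):
        if i in kept:
            score += wi * x[i]
        else:  # adversarial choice: the bound that hurts the score most
            lo, hi = bounds[i]
            score += min(wi * lo, wi * hi)
    return score >= 0

def greedy_abductive(w, b, x, bounds):
    """Greedily drop low-impact features while the prediction stays
    guaranteed (greedy, so not necessarily minimum-size)."""
    kept = set(range(len(w)))
    for i in sorted(kept, key=lambda i: abs(w[i] * x[i])):
        if holds(w, b, x, bounds, kept - {i}):
            kept -= {i}
    return kept

w, b = [2.0, -1.0, 0.5], -1.0
x = [3.0, 0.5, 1.0]              # model score: 6 - 0.5 + 0.5 - 1 = 5.0 >= 0
bounds = [(0, 4), (0, 4), (0, 4)]
print(sorted(greedy_abductive(w, b, x, bounds)))  # → [0]
```

Here feature 0 alone suffices: even with the other two features set adversarially, the score stays non-negative. Finding the true minimum-size set is what the paper's integer-programming machinery handles in the rejected-instance case.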

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Citation-Enforced RAG for Fiscal Document Intelligence: Cited, Explainable Knowledge Retrieval in Tax Compliance

Researchers have developed a new AI framework that uses citation-enforced retrieval-augmented generation (RAG) specifically for analyzing tax and fiscal documents. The system prioritizes transparency and explainability for tax authorities, showing improved citation accuracy and reduced AI hallucinations when tested on real IRS documents.
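The enforcement half of such a pipeline can be sketched without an LLM: after generation, verify that every sentence cites a retrieved passage id and flag anything uncited or dangling. The ids, documents, and `[doc-id]` citation convention below are invented for illustration, not the paper's format:

```python
import re

# Retrieved passages, keyed by a citation id (hypothetical examples).
retrieved = {"pub501-p3": "Head-of-household filing status requires ...",
             "pub17-p12": "The standard deduction for 2023 is ..."}

answer = ("You may qualify for head of household [pub501-p3]. "
          "The standard deduction differs by filing status [pub17-p12]. "
          "Audits are rare for this bracket.")

def check_citations(answer, retrieved):
    """Return sentences that are uncited or cite an unknown passage id."""
    flagged = []
    for sent in re.split(r"(?<=\.)\s+", answer.strip()):
        cites = re.findall(r"\[([\w-]+)\]", sent)
        if not cites or any(c not in retrieved for c in cites):
            flagged.append(sent)
    return flagged

print(check_citations(answer, retrieved))
# → ['Audits are rare for this bracket.']
```

Rejecting or regenerating flagged sentences is one simple way such a system can trade fluency for the reduced-hallucination behavior the paper reports.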

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

MESD: Detecting and Mitigating Procedural Bias in Intersectional Groups

Researchers propose MESD (Multi-category Explanation Stability Disparity), a new metric to detect procedural bias in AI models across intersectional groups. They also introduce a UEF framework that balances utility, explanation quality, and fairness in machine learning systems.
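A toy version of the idea: measure how much explanations vary under repeated perturbation, per group, and report the gap between groups. The group labels, stand-in attribution function, and disparity formula below are illustrative, not MESD's actual definition:

```python
import random

random.seed(0)

def attribution(x):
    """Stand-in explainer whose noise grows with feature magnitude,
    simulating less stable explanations for one group."""
    return [0.8 * x[0] + random.gauss(0, 0.05 * abs(x[0])),
            -0.3 * x[1] + random.gauss(0, 0.05 * abs(x[1]))]

def stability(xs, trials=50):
    """Mean spread (max - min) of attributions over repeated runs."""
    spreads = []
    for x in xs:
        attrs = [attribution(x) for _ in range(trials)]
        for j in range(len(x)):
            col = [a[j] for a in attrs]
            spreads.append(max(col) - min(col))
    return sum(spreads) / len(spreads)

groups = {"group-A": [[1.0, 2.0], [1.2, 1.8]],     # hypothetical subgroups
          "group-B": [[6.0, 7.0], [5.5, 8.0]]}

scores = {g: stability(xs) for g, xs in groups.items()}
disparity = max(scores.values()) - min(scores.values())
print({g: round(s, 3) for g, s in scores.items()}, "disparity:", round(disparity, 3))
```

A nonzero disparity means one group receives systematically less stable explanations than another even when accuracy is comparable, which is the "procedural" bias the metric is after.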

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

GradCFA: A Hybrid Gradient-Based Counterfactual and Feature Attribution Explanation Algorithm for Local Interpretation of Neural Networks

Researchers introduce GradCFA, a new hybrid AI explanation framework that combines counterfactual explanations and feature attribution to improve transparency in neural network decisions. The algorithm extends beyond binary classification to multi-class scenarios and demonstrates superior performance in generating feasible, plausible, and diverse explanations compared to existing methods.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

Reason2Decide: Rationale-Driven Multi-Task Learning

Researchers introduce Reason2Decide, a two-stage training framework that improves clinical decision support systems by aligning AI explanations with predictions. The system achieves better performance than larger foundation models while using 40x smaller models, making clinical AI more accessible for resource-constrained deployments.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Conceptual Views of Neural Networks: A Framework for Neuro-Symbolic Analysis

Researchers introduce 'conceptual views' as a formal framework based on Formal Concept Analysis to globally explain neural networks. Testing on 24 ImageNet models and Fruits-360 datasets shows the framework can faithfully represent models, enable architecture comparison, and extract human-comprehensible rules from neurons.

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠

Delta1 with LLM: symbolic and neural integration for credible and explainable reasoning

Researchers introduce Delta1, a framework that integrates automated theorem generation with large language models to create explainable AI reasoning. The system combines formal logic rigor with natural language explanations, demonstrating applications across healthcare, compliance, and regulatory domains.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10
🧠

FAME: Formal Abstract Minimal Explanation for Neural Networks

Researchers introduce FAME (Formal Abstract Minimal Explanations), a new method for explaining neural network decisions that scales to large networks while producing smaller explanations. The approach uses abstract interpretation and dedicated perturbation domains to eliminate irrelevant features and converge to minimal explanations more efficiently than existing methods.

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

XAI for Coding Agent Failures: Transforming Raw Execution Traces into Actionable Insights

Researchers developed an explainable AI (XAI) system that transforms raw execution traces from LLM-based coding agents into structured, human-interpretable explanations. The system enables users to identify failure root causes 2.8 times faster and propose fixes with 73% higher accuracy through domain-specific failure taxonomy, automatic annotation, and hybrid explanation generation.
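The annotation step such a system needs can be sketched as signature matching against a failure taxonomy: scan the raw trace, tag lines that match known failure classes, and emit a structured diagnosis. The taxonomy and patterns below are invented for illustration; the paper's taxonomy is domain-specific and curated:

```python
# Hypothetical failure taxonomy: (category, trace signature) pairs.
TAXONOMY = [
    ("environment_error", "ModuleNotFoundError"),
    ("flaky_test",        "TimeoutError"),
    ("bad_edit",          "SyntaxError"),
    ("wrong_file",        "No such file or directory"),
]

def diagnose(trace_lines):
    """Tag raw trace lines with failure categories and keep the evidence."""
    findings = []
    for n, line in enumerate(trace_lines, 1):
        for label, signature in TAXONOMY:
            if signature in line:
                findings.append({"line": n, "category": label,
                                 "evidence": line.strip()})
    return findings

trace = [
    "$ python run_tests.py",
    "Traceback (most recent call last):",
    "ModuleNotFoundError: No module named 'requests'",
]
print(diagnose(trace))
```

Structured findings like these, rather than the raw trace, are what let users localize a root cause quickly; the paper layers automatic annotation and generated explanations on top of this kind of taxonomy.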

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

DEX-AR: A Dynamic Explainability Method for Autoregressive Vision-Language Models

Researchers developed DEX-AR, a new explainability method for autoregressive Vision-Language Models that generates 2D heatmaps to understand how these AI systems make decisions. The method addresses challenges in interpreting modern VLMs by analyzing token-by-token generation and visual-textual interactions, showing improved performance across multiple benchmarks.

๐Ÿข Perplexity
AIBullisharXiv โ€“ CS AI ยท Mar 96/10
๐Ÿง 

PONTE: Personalized Orchestration for Natural Language Trustworthy Explanations

Researchers introduce PONTE, a human-in-the-loop framework that creates personalized, trustworthy AI explanations by combining user preference modeling with verification modules. The system addresses the challenge of one-size-fits-all AI explanations by adapting to individual user expertise and cognitive needs while maintaining faithfulness and reducing hallucinations.

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts

Researchers developed an interpretable AI framework for fetal ultrasound image classification that incorporates medical concepts and clinical knowledge. The system uses graph convolutional networks to establish relationships between key medical concepts, providing explanations that align with clinicians' cognitive processes rather than just pixel-level analysis.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Wild-Drive: Off-Road Scene Captioning and Path Planning via Robust Multi-modal Routing and Efficient Large Language Model

Researchers introduced Wild-Drive, a framework for autonomous off-road driving that combines scene captioning and path planning using multimodal AI. The system addresses challenges in harsh weather conditions through robust sensor fusion and efficient large language models, outperforming existing methods in degraded sensing conditions.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Reason Like a Radiologist: Chain-of-Thought and Reinforcement Learning for Verifiable Report Generation

Researchers introduce BoxMed-RL, a new AI framework that uses chain-of-thought reasoning and reinforcement learning to generate spatially verifiable radiology reports. The system mimics radiologist workflows by linking visual findings to precise anatomical locations, achieving 7% improvement over existing methods in key performance metrics.

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10
🧠

GNN Explanations that do not Explain and How to find Them

Researchers have identified critical failures in Self-explainable Graph Neural Networks (SE-GNNs) where explanations can be completely unrelated to how the models actually make predictions. The study reveals that these degenerate explanations can hide the use of sensitive attributes and can emerge both maliciously and naturally, while existing faithfulness metrics fail to detect them.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

CARE: Towards Clinical Accountability in Multi-Modal Medical Reasoning with an Evidence-Grounded Agentic Framework

Researchers introduce CARE, an evidence-grounded agentic framework for medical AI that improves clinical accountability by decomposing tasks into specialized modules rather than using black-box models. The system achieves 10.9% better accuracy than state-of-the-art models by incorporating explicit visual evidence and coordinated reasoning that mimics clinical workflows.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

ThreatFormer-IDS: Robust Transformer Intrusion Detection with Zero-Day Generalization and Explainable Attribution

Researchers developed ThreatFormer-IDS, a Transformer-based intrusion detection system that achieves robust cybersecurity monitoring for IoT and industrial networks. The system demonstrates superior performance in detecting zero-day attacks while providing explainable threat attribution, achieving 99.4% AUC-ROC on benchmark tests.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

A Polynomial-Time Axiomatic Alternative to SHAP for Feature Attribution

Researchers have developed ESENSC_rev2, a polynomial-time alternative to SHAP for AI feature attribution that offers similar accuracy with significantly improved computational efficiency. The method uses cooperative game theory and provides theoretical foundations through axiomatic characterization, making it suitable for high-dimensional explainability tasks.
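To see why a polynomial-time alternative matters, here is exact Shapley attribution for a tiny model: the inner sum ranges over all coalitions of the other features, which is exponential in the feature count and is precisely the cost such methods avoid. The toy model and baseline are illustrative:

```python
from itertools import combinations
from math import factorial

def shapley(f, x, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over all 2^(n-1) coalitions of the remaining features."""
    n = len(x)
    def v(S):  # model value with features in S set to x, others to baseline
        return f([x[i] if i in S else baseline[i] for i in range(n)])
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

f = lambda z: 3 * z[0] + 2 * z[1] * z[2]   # toy model with an interaction term
x, baseline = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
print(shapley(f, x, baseline))  # attributions sum to f(x) - f(baseline) = 5
```

Note the efficiency axiom in action: the attributions sum to the gap between the model's output at `x` and at the baseline, and the interaction between features 1 and 2 is split evenly between them. An axiomatic alternative must preserve properties like these while avoiding the exponential enumeration.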

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

What Helps -- and What Hurts: Bidirectional Explanations for Vision Transformers

Researchers propose BiCAM, a new method for interpreting Vision Transformer (ViT) decisions that captures both positive and negative contributions to predictions. The approach improves explanation quality and enables adversarial example detection across multiple ViT variants without requiring model retraining.
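The "bidirectional" idea contrasts with classic CAM-style methods, which ReLU away negative evidence; keeping the signed contributions yields separate "what helps" and "what hurts" maps. The toy per-token activations and gradients below stand in for a ViT's features and are not BiCAM's actual formulation:

```python
def bidirectional_cam(activations, gradients):
    """Per-token contribution = activation * gradient, split by sign
    instead of ReLU-clipped to non-negative values."""
    contrib = [a * g for a, g in zip(activations, gradients)]
    helps = [max(c, 0.0) for c in contrib]   # positive evidence
    hurts = [min(c, 0.0) for c in contrib]   # negative evidence, retained
    return helps, hurts

acts  = [0.9, 0.2, 0.7, 0.4]    # token activations (hypothetical)
grads = [1.0, -2.0, 0.5, -0.1]  # gradients of the class score w.r.t. them

helps, hurts = bidirectional_cam(acts, grads)
print("helps:", [round(h, 4) for h in helps])
print("hurts:", [round(h, 4) for h in hurts])
```

Token 1 here actively argues against the predicted class; a ReLU-based map would show nothing there, which is also why the retained negative channel is useful for spotting adversarial perturbations.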

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

QIME: Constructing Interpretable Medical Text Embeddings via Ontology-Grounded Questions

Researchers have developed QIME, a new framework for creating interpretable medical text embeddings that uses ontology-grounded questions to represent biomedical text. Unlike black-box AI models, QIME provides clinically meaningful explanations while achieving performance close to traditional dense embeddings in medical text analysis tasks.