75 articles tagged with #explainable-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers developed a Hierarchical Takagi-Sugeno-Kang Fuzzy Classifier System that converts opaque deep reinforcement learning agents into human-readable IF-THEN rules, achieving 81.48% fidelity in tests. The framework addresses the critical explainability problem in AI systems used for safety-critical applications by providing interpretable rules that humans can verify and understand.
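For intuition, a minimal sketch of how rule fidelity is typically measured: evaluate a human-readable IF-THEN surrogate and the black-box policy on the same states and report their agreement rate. The toy policy, rules, and thresholds below are hypothetical, not the paper's system.

```python
import numpy as np

def black_box_policy(state):
    """Stand-in for an opaque DRL agent: state -> discrete action."""
    return int(state[0] + 0.5 * state[1] > 0.0)

def rule_based_policy(state):
    """Human-readable IF-THEN approximation of the policy."""
    x, v = state
    if x > 0.2:                 # IF position clearly positive THEN act
        return 1
    if x > -0.2 and v > 0.1:    # IF position near zero AND velocity positive THEN act
        return 1
    return 0                    # OTHERWISE do not act

def fidelity(states):
    """Fraction of states on which surrogate and agent agree."""
    agree = sum(black_box_policy(s) == rule_based_policy(s) for s in states)
    return agree / len(states)

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(10_000, 2))
print(f"fidelity: {fidelity(states):.2%}")
```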
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠 A new study reveals that standard algorithmic metrics used to evaluate AI counterfactual explanations poorly correlate with human perceptions of explanation quality. The research found weak and dataset-dependent relationships between technical metrics and user judgments, highlighting fundamental limitations in current AI explainability evaluation methods.
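This kind of metric-versus-human comparison usually comes down to a rank correlation. A minimal illustration with made-up scores and ratings (not the study's data or protocol):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical: an algorithmic metric (e.g. counterfactual proximity) and
# human ratings of the same explanations on a 1-5 scale.
metric_scores = np.array([0.91, 0.45, 0.78, 0.62, 0.33, 0.88, 0.51, 0.70])
human_ratings = np.array([3, 4, 2, 5, 3, 4, 2, 4])

rho, p = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A weak or dataset-dependent rho is exactly the mismatch the study reports
# between technical metrics and user judgments.
```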
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce FL-I2MoE, a new Mixture-of-Experts layer for multimodal Transformers that explicitly identifies synergistic and redundant cross-modal feature interactions. The method provides more interpretable explanations for how different data modalities contribute to AI decision-making compared to existing approaches.
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers developed a method to compute minimum-size abductive explanations for AI linear models with reject options, addressing a key challenge in explainable AI for critical domains. The approach uses log-linear algorithms for accepted instances and integer linear programming for rejected instances, proving more efficient than existing methods despite theoretical NP-hardness.
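A sketch of the log-linear idea for an accepted instance of a linear model: start with every feature fixed, then greedily "free" the features whose worst-case contribution costs the least, as long as the prediction cannot flip. Names and data are illustrative, and the reject-option / ILP part is not shown.

```python
import numpy as np

def minimal_abductive_explanation(w, b, x, lo, hi):
    score = float(w @ x + b)
    assert score > 0, "sketch assumes the instance is accepted (positive class)"
    worst = np.minimum(w * lo, w * hi)   # worst-case contribution of a freed feature
    cost = w * x - worst                 # drop in score if feature i is freed (>= 0)
    order = np.argsort(cost)             # free the cheapest features first: O(n log n)
    fixed, margin = set(range(len(x))), score
    for i in order:
        if margin - cost[i] > 0:         # prediction stays positive for every completion
            margin -= cost[i]
            fixed.remove(i)
    return sorted(fixed)                 # features that must stay fixed

w  = np.array([2.0, -1.5, 0.5, 0.1]); b = -0.2
x  = np.array([0.9, -0.8, 0.3, 0.5])
lo = np.full(4, -1.0); hi = np.full(4, 1.0)
print(minimal_abductive_explanation(w, b, x, lo, hi))   # -> [0, 1]
```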
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers have developed a new AI framework that uses citation-enforced retrieval-augmented generation (RAG) specifically for analyzing tax and fiscal documents. The system prioritizes transparency and explainability for tax authorities, showing improved citation accuracy and reduced AI hallucinations when tested on real IRS documents.
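An illustrative post-hoc check in the spirit of citation enforcement: every sentence of the generated answer must cite a document id that was actually retrieved. The prompting and generation side is omitted; the ids, the [doc:N] convention, and the sample text are hypothetical.

```python
import re

retrieved_ids = {"irs-pub-587-p3", "irs-form-8829-instr"}

answer = (
    "Home-office expenses are deducted on Form 8829 [doc:irs-form-8829-instr]. "
    "The simplified method allows $5 per square foot [doc:irs-pub-587-p3]. "
    "Losses can always be carried forward indefinitely."
)

for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
    cited = set(re.findall(r"\[doc:([^\]]+)\]", sentence))
    if not cited:
        print("UNSUPPORTED (no citation):", sentence)
    elif not cited <= retrieved_ids:
        print("BAD CITATION (not in retrieved set):", sentence)
```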
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose MESD (Multi-category Explanation Stability Disparity), a new metric to detect procedural bias in AI models across intersectional groups. They also introduce the UEF framework, which balances utility, explanation quality, and fairness in machine learning systems.
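A toy illustration of an explanation-stability disparity: for each group, measure how similar feature attributions stay under small input perturbations, then take the gap between the most and least stable groups. This mirrors the general idea but is not the paper's exact formula; the model, explainer, and groups are made up.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def stability(explainer, X, noise=0.05, seed=0):
    """Mean similarity of explanations for original vs perturbed inputs."""
    rng = np.random.default_rng(seed)
    sims = [cosine(explainer(x), explainer(x + rng.normal(0, noise, x.shape))) for x in X]
    return float(np.mean(sims))

w = np.array([1.0, -2.0, 0.5])
explainer = lambda x: w * x          # stand-in attribution: w_i * x_i

rng = np.random.default_rng(1)
groups = {"group_A": rng.normal(0, 1, (200, 3)),
          "group_B": rng.normal(0, 3, (200, 3))}   # different feature scales per group

scores = {g: stability(explainer, X) for g, X in groups.items()}
disparity = max(scores.values()) - min(scores.values())
print(scores, "disparity:", round(disparity, 3))
```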
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce GradCFA, a new hybrid AI explanation framework that combines counterfactual explanations and feature attribution to improve transparency in neural network decisions. The algorithm extends beyond binary classification to multi-class scenarios and demonstrates superior performance in generating feasible, plausible, and diverse explanations compared to existing methods.
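A minimal sketch of a gradient-guided counterfactual on a logistic-regression "network": push the prediction toward a target class while penalising the size of the edit, and read the per-feature edit as an attribution. GradCFA itself is richer (multi-class, plausibility and diversity constraints); everything below is illustrative.

```python
import numpy as np

w, b = np.array([1.5, -2.0, 0.8]), -0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
predict = lambda x: sigmoid(w @ x + b)

def counterfactual(x, target=1.0, lam=0.1, lr=0.05, steps=500):
    cf = x.copy()
    for _ in range(steps):
        p = predict(cf)
        # gradient of -log p(target) for the logistic model, plus L1 penalty on the edit
        grad = (p - target) * w + lam * np.sign(cf - x)
        cf -= lr * grad
    return cf

x = np.array([-0.5, 0.8, 0.0])                      # predicted negative
cf = counterfactual(x)
print("p(x) =", round(predict(x), 3), "->", "p(cf) =", round(predict(cf), 3))
print("edit per feature:", np.round(cf - x, 3))     # doubles as a feature attribution
```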
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce Reason2Decide, a two-stage training framework that improves clinical decision support systems by aligning AI explanations with predictions. The system achieves better performance than larger foundation models while using 40x smaller models, making clinical AI more accessible for resource-constrained deployments.
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce 'conceptual views' as a formal framework based on Formal Concept Analysis to globally explain neural networks. Testing on 24 ImageNet models and the Fruits-360 dataset shows the framework can faithfully represent models, enable architecture comparison, and extract human-comprehensible rules from neurons.
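A tiny Formal Concept Analysis sketch of the underlying machinery: binarise "neuron fires on image" into a formal context, then compute the closure of an attribute set, which is the basic operation behind a conceptual view. The 4-image, 3-neuron context is hypothetical.

```python
objects = ["img1", "img2", "img3", "img4"]
context = {                       # object -> set of binary attributes (active neurons)
    "img1": {"n1", "n2"},
    "img2": {"n1", "n2", "n3"},
    "img3": {"n2", "n3"},
    "img4": {"n1"},
}

def extent(attrs):                # objects having *all* the given attributes
    return {o for o in objects if attrs <= context[o]}

def intent(objs):                 # attributes shared by *all* the given objects
    if not objs:
        return {a for s in context.values() for a in s}
    return set.intersection(*(context[o] for o in objs))

def closure(attrs):               # attrs'' : the concept intent generated by attrs
    return intent(extent(attrs))

print(extent({"n1", "n2"}))       # {'img1', 'img2'}
print(closure({"n1", "n2"}))      # {'n1', 'n2'} -> already a concept intent
print(closure({"n3"}))            # {'n2', 'n3'} -> n3 never fires without n2 here
```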
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠 Researchers introduce Delta1, a framework that integrates automated theorem generation with large language models to create explainable AI reasoning. The system combines formal logic rigor with natural language explanations, demonstrating applications across healthcare, compliance, and regulatory domains.
AI · Bullish · arXiv – CS AI · Mar 12 · 6/10
🧠 Researchers introduce FAME (Formal Abstract Minimal Explanations), a new method for explaining neural network decisions that scales to large networks while producing smaller explanations. The approach uses abstract interpretation and dedicated perturbation domains to eliminate irrelevant features and converge to minimal explanations more efficiently than existing methods.
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers developed an explainable AI (XAI) system that transforms raw execution traces from LLM-based coding agents into structured, human-interpretable explanations. The system enables users to identify failure root causes 2.8 times faster and propose fixes with 73% higher accuracy through domain-specific failure taxonomy, automatic annotation, and hybrid explanation generation.
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers developed DEX-AR, a new explainability method for autoregressive Vision-Language Models that generates 2D heatmaps to understand how these AI systems make decisions. The method addresses challenges in interpreting modern VLMs by analyzing token-by-token generation and visual-textual interactions, showing improved performance across multiple benchmarks.
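A generic sketch of turning token-to-patch attention into a 2D heatmap for an autoregressive VLM: average the cross-attention each generated token pays to the image patches, then reshape to the patch grid. DEX-AR's actual relevancy propagation is more involved; the shapes and random weights here are placeholders.

```python
import numpy as np

num_generated_tokens, grid = 12, (14, 14)               # 14x14 = 196 image patches
rng = np.random.default_rng(0)
cross_attn = rng.random((num_generated_tokens, grid[0] * grid[1]))
cross_attn /= cross_attn.sum(axis=-1, keepdims=True)    # each token's attention sums to 1

heatmap = cross_attn.mean(axis=0).reshape(grid)         # aggregate over generated tokens
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min())  # normalise to [0, 1]
print(heatmap.shape, float(heatmap.min()), float(heatmap.max()))
```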
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers introduce PONTE, a human-in-the-loop framework that creates personalized, trustworthy AI explanations by combining user preference modeling with verification modules. The system addresses the challenge of one-size-fits-all AI explanations by adapting to individual user expertise and cognitive needs while maintaining faithfulness and reducing hallucinations.
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers developed an interpretable AI framework for fetal ultrasound image classification that incorporates medical concepts and clinical knowledge. The system uses graph convolutional networks to establish relationships between key medical concepts, providing explanations that align with clinicians' cognitive processes rather than just pixel-level analysis.
AI · Bullish · arXiv – CS AI · Mar 5 · 5/10
🧠 Researchers developed DCENWCNet, a deep learning ensemble model that combines three CNN architectures to classify white blood cells with superior accuracy. The model outperforms existing state-of-the-art networks on the Raabin-WBC dataset and incorporates LIME-based explainability for interpretable medical diagnosis.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9
🧠 Researchers introduced Wild-Drive, a framework for autonomous off-road driving that combines scene captioning and path planning using multimodal AI. The system addresses challenges in harsh weather conditions through robust sensor fusion and efficient large language models, outperforming existing methods in degraded sensing conditions.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠 Researchers introduce BoxMed-RL, a new AI framework that uses chain-of-thought reasoning and reinforcement learning to generate spatially verifiable radiology reports. The system mimics radiologist workflows by linking visual findings to precise anatomical locations, achieving 7% improvement over existing methods in key performance metrics.
AI · Bearish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠 Researchers have identified critical failures in Self-explainable Graph Neural Networks (SE-GNNs) where explanations can be completely unrelated to how the models actually make predictions. The study reveals that these degenerate explanations can hide the use of sensitive attributes and can emerge both maliciously and naturally, while existing faithfulness metrics fail to detect them.
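A sanity check in the spirit of these findings: an explanation is only faithful if keeping just the explained part, or removing it, actually changes the model's output. The "model" and "explanation mask" below are stand-ins, not a real GNN or the paper's SE-GNN setups.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(10)                                              # stand-in input features
model = lambda v: float(1 / (1 + np.exp(-(v[:3].sum() - 1.5)))) # only uses features 0-2

explanation_mask = np.zeros(10)
explanation_mask[5:8] = 1                       # a *degenerate* explanation: wrong features

p_full    = model(x)
p_only    = model(x * explanation_mask)         # keep only the explanation
p_without = model(x * (1 - explanation_mask))   # remove the explanation

print(f"full={p_full:.3f}  only-explanation={p_only:.3f}  without-explanation={p_without:.3f}")
# Removing the "explanation" leaves the prediction unchanged while keeping only it
# destroys it -- evidence the explanation is unrelated to the model's reasoning.
```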
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 11
🧠 Researchers introduce Dynamic Interaction Graph (DIG), a new framework for understanding and improving collaboration between multiple general-purpose AI agents. DIG captures emergent collaboration as a time-evolving network, making it possible for the first time to identify and correct collaboration errors in real time.
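A toy version of a time-evolving interaction graph: bucket agent-to-agent messages into time windows and count directed interactions per window. The log format and agent names are made up; DIG adds semantics on top of this kind of structure.

```python
from collections import defaultdict

log = [
    (0.5, "planner", "coder",   "assign subtask"),
    (1.2, "coder",   "tester",  "request review"),
    (1.8, "tester",  "coder",   "report failure"),
    (3.1, "coder",   "planner", "ask for replan"),
    (3.4, "planner", "coder",   "revised plan"),
]

window = 2.0                                            # seconds per snapshot
graphs = defaultdict(lambda: defaultdict(int))          # window -> (src, dst) -> count
for t, src, dst, _msg in log:
    graphs[int(t // window)][(src, dst)] += 1

for w in sorted(graphs):
    print(f"t=[{w * window:.0f},{(w + 1) * window:.0f}):", dict(graphs[w]))
# A collaboration error (e.g. the tester never reporting back) shows up as a
# missing edge in later snapshots of the evolving graph.
```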
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠 Researchers introduce CARE, an evidence-grounded agentic framework for medical AI that improves clinical accountability by decomposing tasks into specialized modules rather than using black-box models. The system achieves 10.9% better accuracy than state-of-the-art models by incorporating explicit visual evidence and coordinated reasoning that mimics clinical workflows.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠 Researchers developed ThreatFormer-IDS, a Transformer-based intrusion detection system that achieves robust cybersecurity monitoring for IoT and industrial networks. The system demonstrates superior performance in detecting zero-day attacks while providing explainable threat attribution, achieving 99.4% AUC-ROC on benchmark tests.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠 Researchers have developed ESENSC_rev2, a polynomial-time alternative to SHAP for AI feature attribution that offers similar accuracy with significantly improved computational efficiency. The method uses cooperative game theory and provides theoretical foundations through axiomatic characterization, making it suitable for high-dimensional explainability tasks.
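For context on why polynomial-time attribution is plausible: for a purely linear model the Shapley value has a closed form, phi_i = w_i * (x_i - E[x_i]), so exact game-theoretic attribution is trivially cheap. ESENSC_rev2 targets the general case; this snippet only illustrates the kind of attribution being approximated, with made-up numbers.

```python
import numpy as np

w = np.array([0.8, -1.2, 0.3])
X_background = np.array([[0.0, 1.0, 2.0],
                         [1.0, 0.0, 2.0],
                         [0.5, 0.5, 2.0]])
x = np.array([1.5, -0.5, 2.0])

phi = w * (x - X_background.mean(axis=0))     # exact Shapley values for a linear model
print(np.round(phi, 3))                        # feature 3 never varies -> attribution 0
print("sum of phi =", round(phi.sum(), 3),
      "= f(x) - E[f(X)] =", round(w @ x - (X_background @ w).mean(), 3))
```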
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6
🧠 Researchers propose BiCAM, a new method for interpreting Vision Transformer (ViT) decisions that captures both positive and negative contributions to predictions. The approach improves explanation quality and enables adversarial example detection across multiple ViT variants without requiring model retraining.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠 Researchers have developed QIME, a new framework for creating interpretable medical text embeddings that uses ontology-grounded questions to represent biomedical text. Unlike black-box AI models, QIME provides clinically meaningful explanations while achieving performance close to traditional dense embeddings in medical text analysis tasks.
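A sketch of a question-grounded "interpretable embedding": each dimension is the answer to a fixed, ontology-derived yes/no question about the text. The questions and the keyword-based answerer are placeholders for QIME's actual ontology and language-model machinery; only the representation idea is illustrated.

```python
QUESTIONS = [
    ("mentions_a_medication", ("mg", "dose", "tablet", "ibuprofen")),
    ("mentions_a_symptom",    ("pain", "fever", "nausea", "cough")),
    ("mentions_a_procedure",  ("surgery", "biopsy", "mri", "x-ray")),
]

def embed(text):
    """One interpretable coordinate per question: 1.0 if answered yes, else 0.0."""
    t = text.lower()
    return [1.0 if any(k in t for k in keywords) else 0.0 for _, keywords in QUESTIONS]

note = "Patient reports chest pain; started ibuprofen 400 mg twice daily."
print(dict(zip((q for q, _ in QUESTIONS), embed(note))))
# Unlike a dense embedding, every coordinate has a clinical reading a user can verify.
```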