75 articles tagged with #explainable-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 27 · 7/10
🧠A user study with 200 participants found that while explanation correctness in AI systems affects human understanding, the relationship is not linear: performance drops significantly at 70% correctness but does not degrade further below that threshold. The research challenges the assumption that higher computational correctness metrics automatically translate to better human comprehension of AI decisions.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠Researchers developed FairMed-XGB, a machine learning framework that reduces gender bias in healthcare AI models by 40-72% while maintaining predictive accuracy. The system uses Bayesian optimization and explainable AI to ensure equitable treatment decisions in critical care settings.
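FairMed-XGB's code is not reproduced here, but the kind of gender-bias gap it reduces can be sketched with a standard demographic-parity measure; all data and the debiasing step below are illustrative only:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy predictions: group 0 receives positives 3/4 of the time, group 1 only 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
gender = [0, 0, 0, 0, 1, 1, 1, 1]
gap_before = demographic_parity_gap(preds, gender)          # 0.5

# After a hypothetical debiasing step the rates converge.
preds_debiased = [1, 1, 0, 0, 1, 1, 0, 0]
gap_after = demographic_parity_gap(preds_debiased, gender)  # 0.0
reduction = 1 - gap_after / gap_before if gap_before else 0.0
```

The paper's reported 40-72% reduction would correspond to `reduction` values of 0.40-0.72 under a metric of this general shape.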
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠A research paper argues that the most valuable capabilities of large language models are precisely those that cannot be captured by human-readable rules. The thesis is supported by a proof showing that if LLM capabilities could be fully rule-encoded, LLMs would be equivalent to expert systems, an approach that history has shown to be weaker than LLMs.
AI · Bullish · arXiv – CS AI · Mar 12 · 7/10
🧠Researchers have developed a new method to detect and eliminate backdoor triggers in neural networks using active path analysis. The approach shows promising results in experiments with machine learning models used for intrusion detection, addressing a critical cybersecurity vulnerability.
AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠Researchers demonstrate that traditional explainable AI methods designed for static predictions fail when applied to agentic AI systems that make sequential decisions over time. The study shows attribution-based explanations work well for static tasks but trace-based diagnostics are needed to understand failures in multi-step AI agent behaviors.
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠Researchers introduce RAG-Driver, a retrieval-augmented multi-modal large language model designed for autonomous driving that can provide explainable decisions and control predictions. The system addresses data scarcity and generalization challenges in AI-driven autonomous vehicles by using in-context learning and expert demonstration retrieval.
AI · Neutral · arXiv – CS AI · Mar 4 · 6/10
🧠Researchers propose PURE, a new framework for AI-powered recommendation systems that addresses preference-inconsistent explanations, cases where the AI provides factually correct but unconvincing reasoning that conflicts with user preferences. The system uses a select-then-generate approach to improve both evidence selection and explanation generation, demonstrating reduced hallucinations while maintaining recommendation accuracy.
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠Researchers developed a Neuro-Symbolic Agentic Framework combining machine learning with LLM-based reasoning to predict colorectal cancer drug responses. The system achieved significant predictive accuracy (r=0.504) and introduces 'Inverse Reasoning' for simulating genomic edits to predict drug sensitivity changes.
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠Researchers developed COOL-MC, a tool that combines reinforcement learning with model checking to verify and explain AI policies for platelet inventory management in blood banks. The system achieved a 2.9% stockout probability while providing transparent decision-making explanations for safety-critical healthcare applications.
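COOL-MC combines reinforcement learning with formal model checking; as a much simpler illustration of the quantity it verifies, a stockout probability for perishable inventory can be estimated by Monte Carlo simulation. Everything below (the order-up-to policy, uniform demand, shelf life) is a toy assumption, not the paper's model:

```python
import random

def estimate_stockout_probability(order_up_to, max_daily_demand,
                                  shelf_life=5, days=365,
                                  trials=200, seed=0):
    """Toy Monte Carlo estimate of the per-day stockout probability for a
    perishable inventory under an order-up-to policy with FIFO issuing."""
    rng = random.Random(seed)
    stockout_days = 0
    for _ in range(trials):
        # inventory[i] = units with i+1 days of remaining shelf life
        inventory = [0] * shelf_life
        for _ in range(days):
            # Replenish fresh stock up to the target level.
            inventory[-1] += max(0, order_up_to - sum(inventory))
            # Uniform toy demand; a real model would fit hospital data.
            demand = rng.randint(0, max_daily_demand)
            for i in range(shelf_life):          # issue oldest units first
                used = min(inventory[i], demand)
                inventory[i] -= used
                demand -= used
            if demand > 0:
                stockout_days += 1
            # Age the stock: the oldest tier expires, the rest shifts down.
            inventory = inventory[1:] + [0]
    return stockout_days / (trials * days)

p = estimate_stockout_probability(order_up_to=10, max_daily_demand=12)
```

Model checking, unlike this sampling approach, would bound the stockout probability exhaustively over the policy's reachable states.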
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers developed a new graph concept bottleneck layer (GCBM) that can be integrated into Graph Neural Networks to make their decision-making process more interpretable. The method treats graph concepts as 'words' and uses language models to improve understanding of how GNNs make predictions, achieving state-of-the-art performance in both classification accuracy and interpretability.
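GCBM operates on graph neural networks with language-model-derived concepts; the paper's architecture is not reproduced here, but the core bottleneck idea (predictions must pass through named concept scores) can be sketched with dense layers. All shapes and weights below are illustrative:

```python
import numpy as np

def concept_bottleneck_forward(x, W_concept, W_label):
    """Two-stage forward pass of a concept bottleneck model: inputs map to
    human-readable concept scores, and the label is predicted from the
    concept scores alone, so every prediction can be read off the concepts."""
    concepts = 1.0 / (1.0 + np.exp(-(x @ W_concept)))  # sigmoid concept scores
    logits = concepts @ W_label                        # label sees only concepts
    return concepts, logits

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 6))          # 4 samples, 6 raw features
W_concept = rng.normal(size=(6, 3))  # 3 named concepts
W_label = rng.normal(size=(3, 2))    # 2 output classes
concepts, logits = concept_bottleneck_forward(x, W_concept, W_label)
```

Interpretability comes from the middle layer: intervening on a concept score directly changes the downstream prediction in an inspectable way.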
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have developed DeepMedix-R1, a foundation model for chest X-ray interpretation that provides transparent, step-by-step reasoning alongside accurate diagnoses to address the black-box problem in medical AI. The model uses reinforcement learning to align diagnostic outputs with clinical plausibility and significantly outperforms existing models in report generation and visual question answering tasks.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠Researchers propose Geodesic Integrated Gradients (GIG), a new method for explaining AI model decisions that uses curved paths instead of straight lines to compute feature importance. The method addresses flawed attributions in existing approaches by integrating gradients along geodesic paths under a model-induced Riemannian metric.
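For context, the straight-line baseline that GIG improves on is standard Integrated Gradients, which averages gradients along a linear path from a baseline to the input. The sketch below implements only that straight-line version (with a toy quadratic model); GIG replaces the path with a geodesic under a model-induced Riemannian metric:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Standard Integrated Gradients: attribute f(x) - f(baseline) to
    features by averaging gradients along the straight line between them."""
    alphas = np.linspace(0.0, 1.0, steps)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

# Toy model f(x) = sum(x**2); its gradient is 2*x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_fn, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline) = 5.
```

The flawed attributions the paper targets arise when the straight path crosses regions the model treats very differently; a geodesic path stays in regions of consistent model behavior.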
AI · Bullish · OpenAI News · May 9 · 7/10
🧠Researchers used GPT-4 to automatically generate explanations for how individual neurons behave in large language models and to evaluate the quality of those explanations. They have released a comprehensive dataset containing explanations and quality scores for every neuron in GPT-2, advancing AI interpretability research.
AI · Bullish · arXiv – CS AI · 6d ago · 6/10
🧠Researchers introduce MAT-Cell, a neuro-symbolic AI framework that combines large language models with biological constraints to improve single-cell annotation accuracy. The system uses multi-agent reasoning and verification processes to overcome limitations in both supervised learning and LLM-based approaches, demonstrating superior performance on cross-species benchmarks.
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠Researchers propose G-Defense, a graph-enhanced framework that uses large language models and retrieval-augmented generation to detect fake news while providing explainable, fine-grained reasoning. The system decomposes news claims into sub-claims, retrieves competing evidence, and generates transparent explanations without requiring verified fact-checking databases.
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠Researchers introduce chain-of-illocution (CoI) prompting to improve source faithfulness in retrieval-augmented language models, achieving up to 63% gains in source adherence for programming education tasks. The study reveals that standard RAG systems exhibit low fidelity to source materials, with non-RAG models performing worse, while a user study confirms improved faithfulness does not compromise user satisfaction.
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠Researchers introduce REVEAL, an explainable AI framework for detecting AI-generated images through forensic evidence chains and expert-grounded reinforcement learning. The approach addresses the growing challenge of distinguishing synthetic images from authentic ones while providing transparent, verifiable reasoning for detection decisions.
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠Researchers propose an attribution-driven approach to make encoder-based Large Language Models more transparent and trustworthy for network intrusion detection in Software-Defined Networks. By analyzing which traffic features drive model decisions, the study demonstrates that LLMs learn legitimate attack behavior patterns, addressing a critical barrier to deploying AI security tools in sensitive environments.
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠Researchers propose using Inductive Learning of Answer Set Programs (ILASP) to create interpretable approximations of neural networks trained on preference learning tasks. The approach combines dimensionality reduction through Principal Component Analysis with logic-based explanations, addressing the challenge of explaining black-box AI models while maintaining computational efficiency.
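ILASP learns answer-set programs, which the sketch below does not attempt; as a rough stand-in for the pipeline's shape (PCA reduction followed by a logic-style surrogate rule), here is numpy-only PCA with a single learned threshold rule on the projected coordinate. The data and the "rule" are illustrative:

```python
import numpy as np

def pca_reduce(X, n_components=1):
    """Project centered data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy preference data: items are preferred when both feature values are high.
X = np.array([[2.0, 3.0], [3.0, 4.0], [0.0, 1.0], [1.0, 0.0]])
preferred = np.array([1, 1, 0, 0])

z = pca_reduce(X).ravel()
if z[preferred == 1].mean() < z[preferred == 0].mean():
    z = -z  # resolve the sign ambiguity of the principal component

# A symbolic learner like ILASP would search for logic rules over concepts;
# this surrogate "rule" is just a threshold on the projected coordinate.
threshold = (z[preferred == 1].min() + z[preferred == 0].max()) / 2
preds = [int(v > threshold) for v in z]
```

The appeal of the logic-based version is that the learned program is directly readable, whereas the threshold here only explains the reduced dimension, not the original features.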
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠Researchers developed a lightweight framework that uses ontological definitions to provide modular and explainable control over Large Language Model outputs in conversational systems. The method fine-tunes LLMs to generate content according to specific constraints like English proficiency level and content polarity, consistently outperforming pre-trained baselines across seven state-of-the-art models.
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠Researchers propose a new metric to assess consistency of AI model explanations across similar inputs, implementing it on BERT models for sentiment analysis. The framework uses cosine similarity of SHAP values to detect inconsistent reasoning patterns and biased feature reliance, providing more robust evaluation of model behavior.
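The core of this metric is small enough to sketch: compare the attribution vectors (e.g. SHAP values) that a model produces for two near-identical inputs via cosine similarity. The SHAP values below are made-up illustrations, not outputs of an actual BERT model:

```python
import numpy as np

def explanation_consistency(attr_a, attr_b):
    """Cosine similarity between two attribution vectors; values near 1
    indicate consistent reasoning across similar inputs, while low or
    negative values flag unstable or biased feature reliance."""
    a = np.asarray(attr_a, dtype=float)
    b = np.asarray(attr_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical SHAP values for two paraphrases of the same review.
shap_v1 = [0.40, 0.10, -0.05, 0.30]
shap_v2 = [0.38, 0.12, -0.04, 0.28]   # nearly identical attribution
shap_v3 = [-0.30, 0.45, 0.20, -0.10]  # attribution flips sign on key features

consistent = explanation_consistency(shap_v1, shap_v2)
inconsistent = explanation_consistency(shap_v1, shap_v3)
```

A low score between paraphrases is the signal the paper uses to flag models whose predictions agree but whose stated reasons do not.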
AI · Bullish · arXiv – CS AI · Apr 6 · 6/10
🧠Researchers have developed HIL-CBM, a new hierarchical interpretable AI model that enhances explainability by mimicking human cognitive processes across multiple semantic levels. The model outperforms existing Concept Bottleneck Models in classification accuracy while providing more interpretable explanations without requiring manual concept annotations.
AI · Bullish · arXiv – CS AI · Apr 6 · 6/10
🧠Researchers have developed ForgeryGPT, a new multimodal AI framework that can detect, localize, and explain image forgeries through natural language interaction. The system combines advanced computer vision techniques with large language models to provide interpretable analysis of tampered images, addressing limitations in current forgery detection methods.
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠Researchers introduce Distance Explainer, a new method for explaining how AI models make decisions in embedded vector spaces by identifying which features contribute to similarity between data points. The technique adapts existing explainability methods to work with complex multi-modal embeddings like image-caption pairs, addressing a critical gap in AI interpretability research.
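The summary does not give Distance Explainer's exact algorithm, but the general idea of attributing a distance between two embeddings to individual dimensions can be sketched perturbation-style: ablate each dimension and measure how much the distance shrinks. This is an illustrative stand-in, not the paper's method:

```python
import numpy as np

def distance_attribution(a, b, baseline=0.0):
    """Per-dimension contribution to the Euclidean distance between two
    embeddings: mask each dimension in both vectors and record how much
    the distance changes."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    full = np.linalg.norm(a - b)
    contrib = np.empty_like(a)
    for i in range(a.size):
        am, bm = a.copy(), b.copy()
        am[i] = bm[i] = baseline   # ablate dimension i in both embeddings
        contrib[i] = full - np.linalg.norm(am - bm)
    return contrib

# Toy embeddings differing only in the first dimension.
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 0.0, 2.0])
contrib = distance_attribution(a, b)
```

For image-caption embeddings the same loop would mask interpretable input features (image regions, caption tokens) rather than raw embedding dimensions.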
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠A new study reveals that standard algorithmic metrics used to evaluate AI counterfactual explanations poorly correlate with human perceptions of explanation quality. The research found weak and dataset-dependent relationships between technical metrics and user judgments, highlighting fundamental limitations in current AI explainability evaluation methods.