
#explainability News & Analysis

12 articles tagged with #explainability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

Justified or Just Convincing? Error Verifiability as a Dimension of LLM Quality

Researchers introduce 'error verifiability' as a new metric to measure whether AI-generated justifications help users distinguish correct from incorrect answers. The study found that common AI improvement methods don't enhance verifiability, but two new domain-specific approaches successfully improved users' ability to assess answer correctness.
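The summary doesn't reproduce the paper's formula, but a minimal sketch suggests how an error-verifiability-style score could be computed, assuming hypothetical per-item records of whether the user accepted the answer and whether it was actually correct:

```python
# Hypothetical sketch of an error-verifiability-style metric: the share of
# items where a user, shown the model's justification, correctly judges
# whether the answer is right. Field names are illustrative, not the paper's.

def verifiability_score(items):
    """items: list of dicts with 'user_accepted' (bool) and 'answer_correct' (bool)."""
    if not items:
        return 0.0
    agreed = sum(1 for it in items if it["user_accepted"] == it["answer_correct"])
    return agreed / len(items)

trials = [
    {"user_accepted": True,  "answer_correct": True},   # justified correct answer
    {"user_accepted": True,  "answer_correct": False},  # convincing but wrong
    {"user_accepted": False, "answer_correct": False},  # error exposed by justification
]
print(f"verifiability = {verifiability_score(trials):.2f}")  # 0.67
```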

AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠

TRUST Agents: A Collaborative Multi-Agent Framework for Fake News Detection, Explainable Verification, and Logic-Aware Claim Reasoning

TRUST Agents is a multi-agent AI framework designed to improve fake news detection and fact verification by combining claim extraction, evidence retrieval, verification, and explainable reasoning. Unlike binary classification approaches, the system generates transparent, human-inspectable reports with logic-aware reasoning for complex claims, though its evaluation shows that retrieval quality and uncertainty calibration remain significant challenges in automated fact verification.
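A rough sketch of the staged pipeline the summary describes (claim extraction, evidence retrieval, verification, explainable report); every function here is a simplified stand-in, not TRUST Agents code:

```python
# Illustrative multi-stage verification pipeline mirroring the stages named in
# the summary. All logic is a toy stand-in for real extractors and retrievers.

def extract_claims(article):
    # Stand-in: treat each sentence as one checkable claim.
    return [s.strip() for s in article.split(".") if s.strip()]

def retrieve_evidence(claim, corpus):
    # Stand-in: keyword overlap instead of a real retriever.
    words = set(claim.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def verify(claim, evidence):
    # Stand-in verdict with explicit uncertainty when retrieval comes up empty.
    return "unverified (no evidence retrieved)" if not evidence else "supported"

def report(article, corpus):
    # Human-inspectable trace: each claim, the evidence used, and the verdict.
    rows = []
    for claim in extract_claims(article):
        ev = retrieve_evidence(claim, corpus)
        rows.append({"claim": claim, "evidence": ev, "verdict": verify(claim, ev)})
    return rows

corpus = ["The city council approved the budget on Monday"]
for row in report("The council approved the budget. Taxes will triple.", corpus):
    print(row)
```

The second claim deliberately retrieves nothing, illustrating the retrieval-quality failure mode the summary flags: the report surfaces the uncertainty instead of forcing a true/false label.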

AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠

FaCT: Faithful Concept Traces for Explaining Neural Network Decisions

Researchers introduce FaCT, a new approach for explaining neural network decisions through faithful concept-based explanations that don't rely on restrictive assumptions about how models learn. The method includes a new evaluation metric (C²-Score) and demonstrates improved interpretability while maintaining competitive performance on ImageNet.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Diffusion-CAM: Faithful Visual Explanations for dMLLMs

Researchers introduce Diffusion-CAM, a novel interpretability method designed specifically for diffusion-based Multimodal Large Language Models (dMLLMs). Unlike existing visualization techniques optimized for sequential models, this approach accounts for the parallel denoising process inherent to diffusion architectures, achieving superior localization accuracy and visual fidelity in model explanations.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Explainability and Certification of AI-Generated Educational Assessments

Researchers propose a comprehensive framework for making AI-generated educational assessments transparent, explainable, and certifiable through self-rationalization, attribution analysis, and post-hoc verification. The framework introduces a metadata schema and traffic-light certification workflow designed to meet institutional accreditation standards, with proof-of-concept testing on 500 computer science questions demonstrating improved transparency and reduced instructor workload.
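A minimal sketch of what a traffic-light certification step could look like; the check names and thresholds below are assumptions for illustration, not the paper's schema:

```python
# Hypothetical mapping from per-check scores to a traffic-light label.
# Check names and thresholds are invented; the paper's schema may differ.

CHECKS = ("self_rationalization", "attribution", "post_hoc_verification")

def certify(scores):
    """scores: check name -> value in [0, 1]. Returns 'green', 'amber', or 'red'."""
    worst = min(scores[c] for c in CHECKS)
    if worst >= 0.8:
        return "green"   # release without review
    if worst >= 0.5:
        return "amber"   # instructor spot-check required
    return "red"         # block until revised

item = {"self_rationalization": 0.9, "attribution": 0.62, "post_hoc_verification": 0.85}
print(certify(item))  # amber: the weakest check drives the label
```

Gating on the weakest check, rather than an average, is one plausible reading of how a certification workflow could reduce instructor workload: only amber and red items need human attention.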

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Assessing Model-Agnostic XAI Methods against EU AI Act Explainability Requirements

Researchers have developed a framework to assess how well existing explainable AI (XAI) methods comply with the EU AI Act's transparency requirements. The study bridges the gap between current XAI techniques and regulatory mandates by proposing a scoring system that translates expert qualitative assessments into quantitative compliance metrics, helping practitioners navigate AI regulation in European markets.
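As a hedged illustration of translating qualitative expert assessments into a quantitative compliance metric, here is a sketch with an assumed ordinal rating scale and invented requirement names and weights:

```python
# Illustrative qualitative-to-quantitative scoring. The rating scale,
# requirement names, and weights are assumptions, not the paper's scheme.

RATING = {"poor": 0.0, "partial": 0.5, "full": 1.0}  # assumed ordinal scale

def compliance_score(assessments, weights):
    """assessments: requirement -> qualitative rating; weights sum to 1."""
    return sum(weights[req] * RATING[r] for req, r in assessments.items())

method_ratings = {
    "transparency": "full",
    "human_oversight": "partial",
    "record_keeping": "poor",
}
weights = {"transparency": 0.5, "human_oversight": 0.3, "record_keeping": 0.2}
print(f"compliance = {compliance_score(method_ratings, weights):.2f}")  # 0.65
```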

AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠

Adoption and Effectiveness of AI-Based Anomaly Detection for Cross Provider Health Data Exchange

Researchers present a readiness framework and practical deployment strategy for AI-based anomaly detection in multi-provider healthcare environments. The work combines organizational assessment criteria with machine-learning performance evaluation, demonstrating that hybrid approaches pairing rule-based checks with isolation forests optimize both detection coverage and alert efficiency in cross-provider EHR systems.
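A minimal sketch of such a hybrid setup, pairing one hand-written rule with scikit-learn's IsolationForest on toy exchange features (the features, rule, and thresholds are invented):

```python
# Hybrid anomaly detection sketch: deterministic rules catch known-bad
# records, an IsolationForest flags statistical outliers. Toy data only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy features: [records_exchanged_per_hour, distinct_receiving_providers]
normal = rng.normal(loc=[100, 3], scale=[10, 1], size=(500, 2))
X = np.vstack([normal, [[900, 40]]])  # one injected bulk-export anomaly

def rule_flags(X):
    # Example rule: exchanges reaching more than 20 distinct receivers are suspicious.
    return X[:, 1] > 20

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
ml_flags = model.predict(X) == -1          # -1 marks statistical outliers
alerts = rule_flags(X) | ml_flags          # union: rule hit OR model hit
print(f"{alerts.sum()} alerts out of {len(X)} records")
```

Taking the union of rule hits and model hits is one way to read "detection coverage plus alert efficiency": rules guarantee known patterns are never missed, while the contamination parameter caps how many purely statistical alerts the forest raises.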

AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠

PoTable: Towards Systematic Thinking via Plan-then-Execute Stage Reasoning on Tables

Researchers introduce PoTable, a novel AI framework that enhances Large Language Models' ability to reason about tabular data through systematic, stage-oriented planning before execution. The approach mimics professional data analyst workflows by breaking complex table reasoning into distinct analytical stages with clear objectives, demonstrating improved accuracy and explainability across benchmark datasets.
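An illustrative plan-then-execute loop over a pandas table, in the spirit of the staged reasoning described above; the planner is a fixed stand-in and the stage names and logic are invented:

```python
# Plan-then-execute sketch: a plan of named analytical stages is produced
# first, then each stage runs in order against a DataFrame. Toy logic only.

import pandas as pd

df = pd.DataFrame({"city": ["Oslo", "Lima", "Oslo"], "sales": [10, 25, 15]})

def plan(question):
    # Stand-in planner: a real system would derive stages from the question.
    return ["filter_relevant_rows", "aggregate", "answer"]

def execute(stage, state):
    if stage == "filter_relevant_rows":
        state["table"] = state["table"][state["table"]["city"] == "Oslo"]
    elif stage == "aggregate":
        state["total"] = int(state["table"]["sales"].sum())
    elif stage == "answer":
        state["answer"] = f"Oslo sales total: {state['total']}"
    return state

state = {"table": df}
for stage in plan("What are total sales in Oslo?"):
    state = execute(stage, state)   # each stage has one clear objective
print(state["answer"])  # Oslo sales total: 25
```

The explainability claim maps naturally onto this shape: the executed stage list is itself a human-readable trace of how the answer was produced.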

AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠

AI Governance Control Stack for Operational Stability: Achieving Hardened Governance in AI Systems

Researchers propose a six-layer AI Governance Control Stack for Operational Stability to ensure traceable and resilient AI system behavior in high-stakes environments. The framework integrates version control, verification, explainability logging, monitoring, drift detection, and escalation mechanisms while aligning with emerging regulatory frameworks like the EU AI Act and NIST standards.
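Sketching the six layers as an ordered configuration makes the traceability idea concrete; the layer descriptions paraphrase the summary, and the audit-trail helper is hypothetical:

```python
# Sketch of the six layers named above as an ordered configuration.
# Handler descriptions paraphrase the summary; nothing here is the paper's code.

GOVERNANCE_STACK = [
    ("version_control",        "pin model, data, and prompt versions"),
    ("verification",           "pre-deployment behavioral test suites"),
    ("explainability_logging", "persist per-decision explanations"),
    ("monitoring",             "track live performance metrics"),
    ("drift_detection",        "compare live inputs to the training distribution"),
    ("escalation",             "route anomalies to human review"),
]

def audit_trail(decision_id):
    # Each layer contributes one traceable record per decision.
    return [f"[{decision_id}] {layer}: {purpose}" for layer, purpose in GOVERNANCE_STACK]

for line in audit_trail("dec-001"):
    print(line)
```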

AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠

The Scenic Route to Deception: Dark Patterns and Explainability Pitfalls in Conversational Navigation

Researchers warn that AI-powered conversational navigation systems built on Large Language Models could transform route guidance from a verifiable geometric task into a manipulative dialogue. The study proposes a framework that categorizes these risks as either dark patterns or explainability pitfalls and suggests neuro-symbolic architectures to preserve trustworthiness.
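One way to read the neuro-symbolic suggestion: treat the LLM's conversational route as a hypothesis and verify it symbolically against the road graph before presenting it. A toy sketch, with graph, routes, and costs invented:

```python
# Symbolic check of a conversational route suggestion against ground truth.
# The graph and the "LLM route" are toy examples, not a real system.

import heapq

GRAPH = {"A": {"B": 2, "C": 5}, "B": {"C": 1}, "C": {}}

def shortest_cost(graph, start, goal):
    # Dijkstra over the symbolic road graph: the verifiable baseline.
    queue, seen = [(0, start)], set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            heapq.heappush(queue, (cost + w, nxt))
    return float("inf")

def route_cost(graph, route):
    return sum(graph[a][b] for a, b in zip(route, route[1:]))

llm_route = ["A", "C"]           # "scenic" suggestion from the dialogue
detour = route_cost(GRAPH, llm_route) - shortest_cost(GRAPH, "A", "C")
print(f"suggested route is {detour} units longer than optimal")  # flags a 2-unit detour
```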

AI · Neutral · arXiv – CS AI · Mar 16 · 6/10
🧠

Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models

Researchers propose integrating causal methods into machine learning systems to balance competing objectives like fairness, privacy, robustness, accuracy, and explainability. The paper argues that addressing these principles in isolation leads to conflicts and suboptimal solutions, while causal approaches can help navigate trade-offs in both trustworthy ML and foundation models.

AI · Neutral · arXiv – CS AI · Mar 2 · 5/10
🧠

fEDM+: A Risk-Based Fuzzy Ethical Decision Making Framework with Principle-Level Explainability and Pluralistic Validation

Researchers have introduced fEDM+, an enhanced fuzzy ethical decision-making framework for AI systems that provides principle-level explainability and validates decisions against multiple stakeholder perspectives. The framework extends the original fEDM by adding transparent explanations of ethical decisions and replacing single-point validation with pluralistic validation that accommodates different ethical viewpoints.
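A loose sketch of fuzzy, principle-level scoring with a per-principle explanation trace; the membership function, principles, and weights are invented and should not be read as the fEDM+ specification:

```python
# Fuzzy ethical scoring sketch with a principle-level explanation trace.
# Membership shape, principles, and weights are all illustrative assumptions.

def triangular(x, a, b, c):
    # Standard triangular fuzzy membership on [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaluate(option_scores, weights):
    """option_scores: principle -> raw score in [0, 10]; weights sum to 1."""
    explanation = {}
    for principle, score in option_scores.items():
        # Degree to which the option is 'ethically acceptable' on this principle.
        explanation[principle] = round(triangular(score, 2, 7, 10), 2)
    overall = sum(weights[p] * d for p, d in explanation.items())
    return overall, explanation  # verdict plus per-principle trace

scores = {"autonomy": 8.0, "beneficence": 6.0, "justice": 3.0}
weights = {"autonomy": 0.4, "beneficence": 0.4, "justice": 0.2}
overall, why = evaluate(scores, weights)
print(f"acceptability = {overall:.2f}", why)
```

The returned per-principle trace is the point of interest here: it is the shape of output that makes a decision explainable at the level of individual principles rather than as a single opaque score.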