12 articles tagged with #explainability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠 Researchers introduce 'error verifiability' as a new metric to measure whether AI-generated justifications help users distinguish correct from incorrect answers. The study found that common AI improvement methods don't enhance verifiability, but two new domain-specific approaches successfully improved users' ability to assess answer correctness.
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠 TRUST Agents is a multi-agent AI framework designed to improve fake news detection and fact verification by combining claim extraction, evidence retrieval, verification, and explainable reasoning. Unlike binary classification approaches, the system generates transparent, human-inspectable reports with logic-aware reasoning for complex claims, though the authors find that retrieval quality and uncertainty calibration remain significant challenges in automated fact verification.
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠 Researchers introduce FaCT, a new approach for explaining neural network decisions through faithful concept-based explanations that don't rely on restrictive assumptions about how models learn. The method includes a new evaluation metric (C²-Score) and demonstrates improved interpretability while maintaining competitive performance on ImageNet.
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers introduce Diffusion-CAM, a novel interpretability method designed specifically for diffusion-based Multimodal Large Language Models (dMLLMs). Unlike existing visualization techniques optimized for sequential models, this approach accounts for the parallel denoising process inherent to diffusion architectures, achieving superior localization accuracy and visual fidelity in model explanations.
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers propose a comprehensive framework for making AI-generated educational assessments transparent, explainable, and certifiable through self-rationalization, attribution analysis, and post-hoc verification. The framework introduces a metadata schema and traffic-light certification workflow designed to meet institutional accreditation standards, with proof-of-concept testing on 500 computer science questions demonstrating improved transparency and reduced instructor workload.
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers have developed a framework to assess how well existing explainable AI (XAI) methods comply with the EU AI Act's transparency requirements. The study bridges the gap between current XAI techniques and regulatory mandates by proposing a scoring system that translates expert qualitative assessments into quantitative compliance metrics, helping practitioners navigate AI regulation in European markets.
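The core mechanic described here, mapping expert qualitative judgments onto a quantitative compliance score, can be sketched in a few lines. The rating scale, the requirement names, and the weights below are all assumptions for illustration, not the paper's actual rubric.

```python
# Hypothetical sketch: translating expert qualitative assessments into a
# quantitative compliance score. Scale, weights, and requirement names
# are illustrative, not from the paper.

RATING = {"non-compliant": 0.0, "partial": 0.5, "compliant": 1.0}

# Weighted transparency requirements (weights are assumptions).
REQUIREMENTS = {
    "disclosure_of_ai_use": 0.4,
    "meaningful_explanation": 0.4,
    "documentation": 0.2,
}

def compliance_score(expert_ratings):
    """Weighted average of expert ratings mapped onto [0, 1]."""
    return sum(
        REQUIREMENTS[req] * RATING[label]
        for req, label in expert_ratings.items()
    )

score = compliance_score({
    "disclosure_of_ai_use": "compliant",
    "meaningful_explanation": "partial",
    "documentation": "compliant",
})
print(round(score, 2))  # weighted score in [0, 1]
```

A single number like this is what lets practitioners compare XAI methods against a regulatory threshold, which is the gap the study aims to bridge.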
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers present a readiness framework and practical deployment strategy for AI-based anomaly detection in multi-provider healthcare environments. The work combines organizational assessment criteria with machine learning performance evaluation, demonstrating that hybrid rule-based and isolation forest approaches optimize both detection coverage and alert efficiency in cross-provider EHR systems.
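The hybrid pattern mentioned above, explicit rules backed by a learned or statistical detector, can be sketched minimally. To keep the sketch dependency-free, a simple z-score test stands in for the isolation forest; the rules, field names, and thresholds are all illustrative assumptions.

```python
# Hypothetical hybrid anomaly detector: explicit rules plus a statistical
# score. A z-score test stands in for the isolation forest here so the
# sketch stays dependency-free; rules and thresholds are illustrative.
from statistics import mean, stdev

RULES = [
    lambda rec: rec["logins_per_hour"] > 100,   # rule: brute-force pattern
    lambda rec: rec["records_accessed"] > 500,  # rule: bulk data export
]

def rule_flags(rec):
    """Precise, explainable coverage of known abuse patterns."""
    return any(rule(rec) for rule in RULES)

def zscore_flags(values, value, threshold=3.0):
    """Statistical backstop for deviations the rules never anticipated."""
    mu, sigma = mean(values), stdev(values)
    return sigma > 0 and abs(value - mu) / sigma > threshold

def is_anomalous(rec, history):
    # Rules fire first (cheap, auditable); the score catches the rest.
    return rule_flags(rec) or zscore_flags(
        [h["records_accessed"] for h in history], rec["records_accessed"]
    )

history = [{"records_accessed": n} for n in (10, 12, 9, 11, 10, 13, 8)]
print(is_anomalous({"logins_per_hour": 5, "records_accessed": 480}, history))
```

The division of labor is the point the study makes about coverage versus alert efficiency: rules alone miss novel behavior, while a pure statistical detector floods reviewers with unexplainable alerts.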
AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers introduce PoTable, a novel AI framework that enhances Large Language Models' ability to reason about tabular data through systematic, stage-oriented planning before execution. The approach mimics professional data analyst workflows by breaking complex table reasoning into distinct analytical stages with clear objectives, demonstrating improved accuracy and explainability across benchmark datasets.
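The stage-oriented idea, fix a plan of named analytical stages up front, then execute them in order so every intermediate result is inspectable, can be illustrated with a toy table. This is a hypothetical sketch of the pattern, not PoTable itself; the stage names and the example question are assumptions.

```python
# Hypothetical sketch of stage-oriented table reasoning: a plan of named
# stages is fixed before execution, and each intermediate result is kept
# for inspection. Stage names and data are illustrative, not from PoTable.

table = [
    {"city": "Paris", "country": "France", "population": 2.1},
    {"city": "Lyon", "country": "France", "population": 0.5},
    {"city": "Berlin", "country": "Germany", "population": 3.6},
]

def stage_filter(rows):
    """Stage 1: restrict rows to the relevant subset."""
    return [r for r in rows if r["country"] == "France"]

def stage_aggregate(rows):
    """Stage 2: compute the quantity the question asks for."""
    return sum(r["population"] for r in rows)

def stage_answer(value):
    """Stage 3: phrase the result."""
    return f"Total population of French cities in the table: {value}M"

plan = [stage_filter, stage_aggregate, stage_answer]
result = table
trace = []  # intermediates kept for explainability
for stage in plan:
    result = stage(result)
    trace.append((stage.__name__, result))
print(result)
```

Because the plan is declared before anything runs and the trace records each stage's output, a reviewer can check where a wrong answer went off the rails, which is the explainability gain the entry describes.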
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers propose a six-layer AI Governance Control Stack for Operational Stability to ensure traceable and resilient AI system behavior in high-stakes environments. The framework integrates version control, verification, explainability logging, monitoring, drift detection, and escalation mechanisms while aligning with emerging regulatory frameworks like the EU AI Act and NIST standards.
AI · Bearish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers warn that AI-powered conversational navigation systems using Large Language Models could transform route guidance from verifiable geometric tasks into manipulative dialogues. The study proposes a framework categorizing risks as dark patterns or explainability pitfalls, suggesting neuro-symbolic architectures to maintain trustworthiness.
AI · Neutral · arXiv – CS AI · Mar 16 · 6/10
🧠 Researchers propose integrating causal methods into machine learning systems to balance competing objectives like fairness, privacy, robustness, accuracy, and explainability. The paper argues that addressing these principles in isolation leads to conflicts and suboptimal solutions, while causal approaches can help navigate trade-offs in both trustworthy ML and foundation models.
AI · Neutral · arXiv – CS AI · Mar 2 · 5/10
🧠 Researchers have introduced fEDM+, an enhanced fuzzy ethical decision-making framework for AI systems that provides principle-level explainability and validates decisions against multiple stakeholder perspectives. The framework extends the original fEDM by adding transparent explanations of ethical decisions and replacing single-point validation with pluralistic validation that accommodates different ethical viewpoints.