AIBearish · arXiv – CS AI · 6h ago · 7/10
🧠
Evaluating Explainability in Safety-Critical ATR Systems: Limitations of Post-Hoc Methods and Paths Toward Robust XAI
A peer-reviewed study evaluates explainability methods in AI systems used for automatic target recognition (ATR) in safety-critical applications, finding that popular post-hoc explanation techniques have significant limitations, including spurious explanations and vulnerability to adversarial manipulation. The authors argue that current XAI approaches are insufficient for deployment in high-stakes environments and call for more robust, causally grounded methods that prioritize system assurance over visual plausibility.
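The fragility of post-hoc explanations can be illustrated with a toy example (not from the paper itself; the model and function names below are hypothetical). For gradient-based saliency, an imperceptible input perturbation can leave the model's output essentially unchanged while flipping the explanation:

```python
def model(x, w=(1.0, 2.0)):
    # Toy ReLU network: f(x) = sum_i relu(w_i * x_i)
    return sum(max(0.0, wi * xi) for wi, xi in zip(w, x))

def saliency(x, w=(1.0, 2.0)):
    # Gradient of f with respect to each input feature:
    # w_i where the ReLU is active, 0 where it is not.
    return [wi if wi * xi > 0 else 0.0 for wi, xi in zip(w, x)]

x     = [0.001, 1.0]   # original input
x_adv = [-0.001, 1.0]  # imperceptible perturbation of feature 1

# Outputs are nearly identical (2.001 vs 2.0)...
print(model(x), model(x_adv))
# ...but the saliency attribution for feature 1 vanishes entirely.
print(saliency(x))      # [1.0, 2.0]
print(saliency(x_adv))  # [0.0, 2.0]
```

Two inputs the model treats almost identically receive contradictory explanations, which is one reason visual plausibility alone is a weak basis for assurance in safety-critical settings.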