
CoAX: Cognitive-Oriented Attribution eXplanation User Model of Human Understanding of AI Explanations

arXiv – CS AI | Louth Bin Rawshan, Zhuoyu Wang, Brian Y. Lim

🤖 AI Summary

Researchers developed CoAX, a cognitive modeling framework that analyzes how users understand and interpret explanations from explainable AI (XAI) systems when making decisions about tabular data. By studying the reasoning strategies people apply across different explanation methods, the team found that cognitive models predict human decision-making better than traditional machine-learning proxies, offering insights for designing more usable AI explanations.

Analysis

The gap between XAI innovation and actual user comprehension represents a critical challenge in deploying trustworthy AI systems. CoAX addresses this by bridging cognitive science and explainability research, moving beyond the technical content of explanations to understand how humans actually process and act on AI-generated insights. The work reveals that the reasoning strategies users employ when reviewing feature-importance or attribution-based explanations vary significantly, and that not all strategies prove equally effective for forward simulation tasks, i.e., predicting what the AI model will output for a given input.
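
To make that divergence concrete, here is a minimal sketch of two plausible reasoning strategies applied to the same attribution scores. The feature names, values, and strategy labels (weighted-additive vs. take-the-best) are illustrative assumptions, not CoAX's published taxonomy:

```python
# Hypothetical forward-simulation task: given per-feature attribution scores
# for one tabular instance, predict what the AI model decided.
# All names and numbers below are invented for illustration.
attributions = {"income": +0.40, "debt": -0.25, "age": -0.20, "tenure": -0.10}

def weighted_additive(attr):
    """Integrate evidence from every feature (sum of all attributions)."""
    return "approve" if sum(attr.values()) > 0 else "reject"

def take_the_best(attr):
    """Follow only the single most influential feature, ignoring the rest."""
    top = max(attr, key=lambda f: abs(attr[f]))
    return "approve" if attr[top] > 0 else "reject"

print(weighted_additive(attributions))  # reject  (net evidence is -0.15)
print(take_the_best(attributions))      # approve (income dominates at +0.40)
```

The same explanation yields opposite simulated answers, which is exactly the kind of strategy-dependent divergence the study measures.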

The research emerges from growing recognition that explainability alone doesn't guarantee usability. Previous studies showed users often misinterpret or misapply AI explanations despite their availability, suggesting explanations are poorly matched to human cognitive processes. CoAX's formative and summative study design provides empirical grounding for understanding these failures, using cognitive modeling to reverse-engineer which mental processes align with successful decision-making.

For AI developers and product teams, this work offers a systematic methodology for debugging explanation interfaces before costly real-world deployments. Rather than iterating with hundreds of users, fitted cognitive models can simulate decision-making under varied explanation conditions, dramatically reducing development cycles. Organizations building explainability features into financial platforms, healthcare systems, or other high-stakes domains can leverage these insights to prioritize explanation formats that actually enhance human judgment.
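
As a sketch of what that simulation loop could look like, assuming a simple softmax-choice cognitive model whose parameters (`weight`, `temperature`) stand in for values fitted to real participants, one can compare explanation formats computationally:

```python
import math
import random

random.seed(0)

def simulate_user(attributions, weight=1.2, temperature=0.5):
    """Hypothetical fitted cognitive model: the simulated user sums the
    attribution evidence it is shown, then chooses probabilistically."""
    evidence = weight * sum(attributions)
    p_approve = 1.0 / (1.0 + math.exp(-evidence / temperature))
    return "approve" if random.random() < p_approve else "reject"

# Two candidate interface conditions: show all attributions vs. only the
# two strongest ones (a common truncation in explanation UIs).
full = [0.40, -0.25, -0.20, -0.10]
top2 = sorted(full, key=abs)[-2:]

for name, shown in [("full", full), ("top-2", top2)]:
    decisions = [simulate_user(shown) for _ in range(10_000)]
    print(f"{name:5s}: simulated reject rate = "
          f"{decisions.count('reject') / len(decisions):.2f}")
```

Under these invented parameters, truncating to the top two features flips the simulated majority decision, illustrating how a fitted model can flag risky explanation designs before any participant is recruited.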

Future research should extend this framework beyond tabular data to include image, text, and complex time-series explanations where cognitive load and comprehension challenges intensify significantly.

Key Takeaways
  • Cognitive modeling outperforms traditional ML baselines at predicting how humans use AI explanations for decision-making (a toy sketch of this comparison follows the list)
  • Different XAI methods (feature importance, attribution) trigger distinct reasoning strategies with varying effectiveness
  • Fitted cognitive models let researchers explore, computationally, questions that would otherwise require expensive user studies
  • Current XAI approaches fail to account for how human cognition actually processes explanation information
  • This framework provides actionable methodology for improving explainability interface design in production AI systems
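
As a minimal sketch of the first takeaway, the snippet below fits a one-parameter choice model to synthetic "human" decisions by grid search and compares its held-out accuracy against a deliberately crude majority-vote proxy. Everything here, from the data generator to the baseline, is an invented stand-in for the paper's actual models and study data:

```python
import math
import random

random.seed(1)

def p_approve(evidence, temperature):
    """Logistic choice rule of a one-parameter cognitive model."""
    return 1.0 / (1.0 + math.exp(-evidence / temperature))

# Synthetic stand-in for study data: (attribution evidence, human choice).
data = [(e, random.random() < p_approve(e, 0.4))
        for e in (random.uniform(-1, 1) for _ in range(200))]
train, test = data[:150], data[150:]

def neg_log_likelihood(temperature, dataset):
    return -sum(math.log(p if chose_approve else 1.0 - p)
                for evidence, chose_approve in dataset
                for p in [p_approve(evidence, temperature)])

# Fit the cognitive model's temperature by grid search on the training split.
best_t = min((t / 10 for t in range(1, 31)),
             key=lambda t: neg_log_likelihood(t, train))

# Crude "ML proxy" baseline: always predict the training majority choice.
majority = sum(choice for _, choice in train) > len(train) / 2

cog_acc = sum((p_approve(e, best_t) > 0.5) == c for e, c in test) / len(test)
base_acc = sum(majority == c for _, c in test) / len(test)
print(f"temperature={best_t:.1f}  cognitive={cog_acc:.2f}  baseline={base_acc:.2f}")
```

Run as-is, the fitted choice model clearly beats the majority baseline on the held-out split, mirroring in toy form the paper's headline comparison between cognitive models and conventional predictive proxies.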