
TrustMH-Bench: A Comprehensive Benchmark for Evaluating the Trustworthiness of Large Language Models in Mental Health

arXiv – CS AI | Zixin Xiong, Ziteng Wang, Haotian Fan, Xinjie Zhang, Wenxuan Wang
🤖 AI Summary

Researchers have developed TrustMH-Bench, a comprehensive framework to evaluate the trustworthiness of Large Language Models (LLMs) in mental health applications. Testing revealed that both general-purpose and specialized mental health LLMs, including advanced models like GPT-5.1, significantly underperform across critical trustworthiness dimensions in mental health scenarios.

Key Takeaways
  • TrustMH-Bench evaluates LLMs across eight core pillars: reliability, crisis identification, safety, fairness, privacy, robustness, anti-sycophancy, and ethics.
  • Extensive testing of six general-purpose and six specialized mental health LLMs revealed significant deficiencies across multiple trustworthiness dimensions.
  • Even advanced models like GPT-5.1 fail to maintain consistently high performance across all trustworthiness dimensions in mental health contexts.
  • The research highlights critical gaps in current LLM capabilities for high-stakes, safety-sensitive mental health applications.
  • The framework establishes a systematic approach to quantify trustworthiness specifically for mental health AI applications.
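To make the quantification idea concrete, here is a minimal illustrative sketch in Python. The eight pillar names come from the article; the scoring scheme, function names, and numbers are assumptions for illustration and are not taken from the TrustMH-Bench paper. The point it demonstrates is the article's observation that no model maintains consistently high performance: a high average score can mask a single weak pillar, so the worst-case dimension matters in safety-sensitive settings.

```python
# Hypothetical per-dimension trustworthiness aggregation (NOT the paper's method).
# Pillar names are from the article; scores below are invented for illustration.

PILLARS = [
    "reliability", "crisis identification", "safety", "fairness",
    "privacy", "robustness", "anti-sycophancy", "ethics",
]

def summarize(scores: dict[str, float]) -> dict[str, float]:
    """Report the mean and worst-case pillar score for one model.

    A high mean can hide a single weak pillar, which is exactly the
    failure mode the benchmark flags in high-stakes mental health use.
    """
    vals = [scores[p] for p in PILLARS]
    return {"mean": sum(vals) / len(vals), "worst": min(vals)}

# Example: a model strong everywhere except anti-sycophancy.
example = {p: 0.9 for p in PILLARS}
example["anti-sycophancy"] = 0.4
print(summarize(example))  # mean ≈ 0.84, but worst = 0.4
```

Reporting the minimum alongside the mean is one simple way to surface the "inconsistent across dimensions" failure the paper highlights for models like GPT-5.1.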