
TrustMH-Bench: A Comprehensive Benchmark for Evaluating the Trustworthiness of Large Language Models in Mental Health

arXiv – CS AI | Zixin Xiong, Ziteng Wang, Haotian Fan, Xinjie Zhang, Wenxuan Wang
AI Summary

Researchers have developed TrustMH-Bench, a comprehensive framework to evaluate the trustworthiness of Large Language Models (LLMs) in mental health applications. Testing revealed that both general-purpose and specialized mental health LLMs, including advanced models like GPT-5.1, significantly underperform across critical trustworthiness dimensions in mental health scenarios.

Key Takeaways
  • TrustMH-Bench evaluates LLMs across eight core pillars: reliability, crisis identification, safety, fairness, privacy, robustness, anti-sycophancy, and ethics.
  • Extensive testing of six general-purpose and six specialized mental health LLMs revealed significant deficiencies across multiple trustworthiness dimensions.
  • Even advanced models like GPT-5.1 fail to maintain consistently high performance across all trustworthiness dimensions in mental health contexts.
  • The research highlights critical gaps in current LLM capabilities for high-stakes, safety-sensitive mental health applications.
  • The framework establishes a systematic approach to quantifying trustworthiness specifically for mental health AI applications.
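The takeaways above stress that an acceptable average score can mask failure on a single pillar. A minimal sketch of this idea is below: the eight pillar names come from the article, but the scoring scale, the example numbers, and the mean/worst-case aggregation rule are illustrative assumptions, not the paper's actual TrustMH-Bench methodology.

```python
# Hypothetical per-pillar trustworthiness scorecard (illustrative only).
# Pillar names are taken from the article; scores and the aggregation
# rule are assumptions, not TrustMH-Bench's published metric.

PILLARS = [
    "reliability", "crisis identification", "safety", "fairness",
    "privacy", "robustness", "anti-sycophancy", "ethics",
]

def summarize(scores: dict[str, float]) -> dict[str, float]:
    """Aggregate per-pillar scores in [0, 1] into mean and worst case."""
    values = [scores[p] for p in PILLARS]
    return {
        "mean": sum(values) / len(values),
        # In safety-sensitive settings the weakest pillar dominates:
        # a high average can hide one critical deficiency.
        "min": min(values),
    }

example = {p: 0.9 for p in PILLARS}
example["crisis identification"] = 0.4  # one weak pillar
print(summarize(example))  # {'mean': 0.8375, 'min': 0.4}
```

Reporting the minimum alongside the mean captures the article's point that models such as GPT-5.1 can score well overall while still failing individual dimensions.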