y0news
← Feed
Back to feed
🧠 AI NeutralImportance 7/10

Using LLM-as-a-Judge/Jury to Advance Scalable, Clinically-Validated Safety Evaluations of Model Responses to Users Demonstrating Psychosis

arXiv – CS AI|May Lynn Reese, Markela Zeneli, Mindy Ng, Jacob Haimes, Andreea Damien, Elizabeth Stade|
🤖AI Summary

Researchers developed a scalable method using LLMs as judges to evaluate AI safety for users with psychosis, finding strong alignment with human clinical consensus. The study addresses critical risks of LLMs potentially reinforcing delusions in vulnerable mental health populations through automated safety assessment.

Key Takeaways
  • LLMs used for mental health support pose significant risks to individuals with psychosis by potentially reinforcing delusions and hallucinations.
  • Researchers developed seven clinician-informed safety criteria and a human-consensus dataset for evaluating LLM responses to psychotic users.
  • LLM-as-a-Judge showed strong alignment with human clinical consensus, with Cohen's kappa scores ranging from 0.56 to 0.75.
  • The best single LLM judge slightly outperformed the majority-vote LLM-as-a-Jury approach in safety evaluations.
  • This methodology offers a scalable solution for clinically-validated safety assessments of AI models in mental health contexts.
Read Original →via arXiv – CS AI
Act on this with AI
Stay ahead of the market.
Connect your wallet to an AI agent. It reads balances, proposes swaps and bridges across 15 chains — you keep full control of your keys.
Connect Wallet to AI →How it works
Related Articles