New AI-Driven Tools for Enhancing Campus Well-being: A Prevention and Intervention Approach
Researchers have developed an integrated AI framework for campus mental health monitoring, combining TigerGPT (an LLM-powered survey chatbot) for prevention and PsychoGPT (a DSM-5-aligned screening tool) for intervention. The system uses reinforcement learning and multi-model reasoning to improve feedback quality and reduce hallucinations in mental health assessment.
This dissertation presents a comprehensive approach to campus mental health through AI-driven tools that address a critical gap in university mental health infrastructure. The work demonstrates practical applications of large language models in sensitive healthcare contexts, where accuracy and reliability are paramount. TigerGPT's 81% satisfaction rate reflects growing acceptance of conversational AI for health surveys, while AURA's reinforcement learning framework shows how adaptive systems can improve data quality without adding user friction.
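The summary does not detail AURA's learning algorithm, so the following is only a rough illustration of the general idea: adaptive selection of follow-up survey prompts can be framed as a multi-armed bandit, where each "arm" is a prompting strategy and the reward is a scored measure of response quality. All class and strategy names here are hypothetical.

```python
import random

# Hypothetical sketch (not AURA's actual design): epsilon-greedy bandit
# over follow-up prompt strategies, rewarded by a response-quality score.
class AdaptivePromptSelector:
    def __init__(self, strategies, epsilon=0.1):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in self.strategies}
        self.values = {s: 0.0 for s in self.strategies}  # running mean reward

    def select(self):
        # Explore with probability epsilon; otherwise exploit the best arm.
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies, key=self.values.get)

    def update(self, strategy, reward):
        # Incremental mean: v += (r - v) / n
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n

random.seed(0)  # for a reproducible demo run
selector = AdaptivePromptSelector(["open_ended", "clarifying", "scaled"])
for _ in range(100):
    arm = selector.select()
    # A real system would score the student's answer with a learned quality
    # model; here we simulate one strategy being more effective than others.
    reward = 0.8 if arm == "clarifying" else 0.4
    selector.update(arm, reward)
```

Over repeated interactions the selector concentrates on whichever prompt style yields higher-quality responses, which is the kind of adaptation that could raise mean survey quality while cutting unnecessary clarification prompts.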
The intervention component represents a more technically ambitious application. PsychoGPT's integration with DSM-5 and PHQ-8 guidelines grounds the system in established clinical standards, addressing a key challenge in healthcare AI: maintaining alignment with professional diagnostic criteria. Stacked Multi-Model Reasoning specifically targets hallucination reduction, a persistent problem in LLM applications where false information can have serious consequences in mental health contexts.
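The dissertation's exact stacking procedure is not reproduced in this summary. As a minimal sketch of the general technique, several "expert" models can each assess the same input, with an aggregation layer keeping only conclusions that reach sufficient agreement, so a single model's hallucinated judgment is suppressed. The expert functions and labels below are stand-in stubs, not PsychoGPT's components.

```python
from collections import Counter

# Hypothetical stubs: each "expert" maps a transcript to a screening label.
# In a real stacked system these would be distinct fine-tuned LLMs.
def expert_a(text):
    return "moderate_distress"

def expert_b(text):
    return "moderate_distress"

def expert_c(text):
    return "no_distress"  # dissenting (possibly hallucinated) judgment

def stacked_screen(text, experts, min_agreement=2):
    votes = Counter(expert(text) for expert in experts)
    label, count = votes.most_common(1)[0]
    # Abstain and defer to a clinician when no label reaches the agreement
    # threshold, rather than emitting a low-confidence automated assessment.
    return label if count >= min_agreement else "defer_to_clinician"

result = stacked_screen("sample transcript", [expert_a, expert_b, expert_c])
```

The design choice worth noting is the abstention path: in a clinical setting, deferring on disagreement is usually safer than forcing a single-model answer through.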
For the broader AI healthcare ecosystem, this work validates that specialized, clinically-grounded LLMs can outperform general-purpose models on domain-specific tasks. The modular framework architecture, where prevention tools feed into intervention models, suggests a scalable template for other institutional health applications. However, real-world deployment would require regulatory compliance, clinician validation, and careful consideration of liability and data privacy. The academic research demonstrates technical feasibility, but institutional adoption hinges on whether universities can navigate clinical validation requirements and liability frameworks.
- AURA's reinforcement learning framework improved survey quality by a mean gain of 0.12 while reducing specification prompts by 63%, demonstrating that adaptive AI can enhance health feedback collection.
- Stacked Multi-Model Reasoning outperforms single-model approaches on mental health screening by layering expert models to reduce hallucinations in clinical assessments.
- The LLM-based survey chatbot achieved an 81% satisfaction rate and a 75% usability score, indicating user acceptance of conversational AI for sensitive health monitoring.
- Clinical grounding in DSM-5 and PHQ-8 standards enables LLMs to perform explainable symptom-level scoring and distress classification without heavy reliance on keyword matching.
- Integrated prevention-intervention frameworks show potential for institutional deployment but require clinical validation and regulatory compliance before widespread adoption.
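To make the symptom-level scoring point concrete, the PHQ-8's published structure is simple to aggregate: eight items each scored 0-3 by symptom frequency over the past two weeks, summed to a 0-24 total, with 10 or more as the standard cutoff for current depression. The sketch below is illustrative, not PsychoGPT's implementation, and the item keys are shorthand for the official questionnaire wording.

```python
# Illustrative PHQ-8 aggregation (not the dissertation's implementation).
# Item keys are shorthand for the eight PHQ-8 questionnaire items.
PHQ8_ITEMS = [
    "anhedonia", "depressed_mood", "sleep", "fatigue",
    "appetite", "self_worth", "concentration", "psychomotor",
]

# (lower bound of total score, severity label) per standard PHQ cut points.
SEVERITY_BANDS = [(0, "none/minimal"), (5, "mild"), (10, "moderate"),
                  (15, "moderately severe"), (20, "severe")]

def phq8_total(item_scores):
    if set(item_scores) != set(PHQ8_ITEMS):
        raise ValueError("expected a score for each of the 8 PHQ-8 items")
    if any(not 0 <= s <= 3 for s in item_scores.values()):
        raise ValueError("item scores must be integers from 0 to 3")
    return sum(item_scores.values())

def phq8_severity(total):
    # Highest band whose lower bound the total reaches.
    band = SEVERITY_BANDS[0][1]
    for lower, label in SEVERITY_BANDS:
        if total >= lower:
            band = label
    return band

scores = {item: 1 for item in PHQ8_ITEMS}  # hypothetical responses
scores["depressed_mood"] = 3
total = phq8_total(scores)  # 8 items at 1, plus 2 extra, gives a total of 10
```

Because each item maps to a specific symptom, an LLM that emits per-item scores (rather than a single label) supports exactly the kind of explainable, symptom-level output the summary describes.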