y0news
🧠 AI · 🟢 Bullish · Importance 7/10

Adoption and Use of LLMs at an Academic Medical Center

arXiv – CS AI | Nigam H. Shah, Nerissa Ambers, Abby Pandya, Timothy Keyes, Juan M. Banda, Srikar Nallan, Carlene Lugtu, Artem A. Trotsyuk, Suhana Bedi, Alyssa Unell, Miguel Fuentes, Francois Grolleau, Sneha S. Jain, Jonathan Chen, Devdutta Dash, Danton Char, Aditya Sharma, Duncan McElfresh, Patrick Scully, Vishanthan Kumar, Clancy Dennis, Connor OBrien, Satchi Mouniswamy, Elvis Jones, Krishna Jasti, Gunavathi Mannika Lakshmanan, Sree Ram Akula, Varun Kumar Singh, Ramesh Rajmanickam, Sudhir Sinha, Vicky Zhou, Xu Wang, Bilal Mawji, Joshua Ge, Wencheng Li, Travis Lyons, Jarrod Helzer, Vikas Kakkar, Ramesh Powar, Darren Batara, Cheryl Cordova, William Frederick III, Olivia Tang, Phoebe Morgan, April S. Liang, Stephen P. Ma, Shivam Vedak, Dong-han Yao, Akshay Swaminathan, Mehr Kashyap, Brian Ng, Jamie Hellman, Nikesh Kotecha, Christopher Sharp, Gretchen Brown, Christian Lindmark, Anurang Revri, Michael A. Pfeffer
🤖 AI Summary

Researchers at an academic medical center developed ChatEHR, an LLM system integrated into electronic health records that enables both automated clinical tasks and interactive use across patient timelines. Over 1.5 years, the platform achieved adoption by 1,075 users conducting 23,000 sessions, generating an estimated $6M in first-year savings while maintaining vendor-agnostic governance.

Analysis

ChatEHR represents a pragmatic institutional approach to LLM deployment in healthcare, addressing a critical gap between LLM capability and clinical workflow reality. Traditional standalone tools fail because they impose manual data entry friction; this system solves that by integrating directly with existing EHR infrastructure and patient data spanning years. The deployment demonstrates that LLM value emerges not from raw model performance but from reducing friction at the human-technology interface.

The adoption metrics reveal substantial institutional momentum: 1,075 trained users generating 23,000 sessions in three months indicates genuine workflow integration rather than pilot-phase experimentation. However, the reported rates of 0.73 hallucinations and 1.60 inaccuracies per generation expose persistent reliability challenges in clinical contexts, where errors carry real consequences. The team's finding that benchmark-based evaluations prove insufficient mirrors broader industry recognition that standardized metrics fail to capture real-world performance variability.
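Per-generation error rates like the 0.73 and 1.60 figures above imply some form of ongoing audit rather than a one-off benchmark. A minimal sketch of how such rates might be aggregated from human-reviewed generations is below; the class and field names are illustrative assumptions, not the paper's actual monitoring system.

```python
from dataclasses import dataclass, field

@dataclass
class GenerationAudit:
    """One human-reviewed LLM generation with counts of flagged errors.

    Hypothetical convention: 'hallucinations' are claims unsupported by the
    source record; 'inaccuracies' are claims that contradict it.
    """
    hallucinations: int
    inaccuracies: int

@dataclass
class ErrorRateMonitor:
    """Aggregates audits into running per-generation error rates."""
    audits: list = field(default_factory=list)

    def record(self, audit: GenerationAudit) -> None:
        self.audits.append(audit)

    def rates(self) -> dict:
        n = len(self.audits)
        if n == 0:
            return {"hallucinations_per_generation": 0.0,
                    "inaccuracies_per_generation": 0.0}
        return {
            "hallucinations_per_generation":
                sum(a.hallucinations for a in self.audits) / n,
            "inaccuracies_per_generation":
                sum(a.inaccuracies for a in self.audits) / n,
        }

# Example: three audited generations with (hallucination, inaccuracy) counts.
monitor = ErrorRateMonitor()
for h, i in [(1, 2), (0, 1), (1, 2)]:
    monitor.record(GenerationAudit(hallucinations=h, inaccuracies=i))
print(monitor.rates())
```

The point of continuous aggregation like this is that rates can be tracked per clinical task and per model, which is exactly what benchmark-only evaluation misses.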

The financial impact framework reveals sophisticated institutional thinking. Rather than pursuing vendor lock-in, the medical center deliberately built a model-agnostic platform, enabling comparative evaluation of different LLM providers against specific clinical tasks. This approach distributes power toward the institution rather than LLM vendors, critical in healthcare where data governance and clinical autonomy carry regulatory and ethical weight.
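The model-agnostic design described above can be pictured as a thin routing layer between clinical tasks and interchangeable providers. The sketch below is an assumption about how such a layer might look, not the ChatEHR implementation; the provider, task, and class names are hypothetical, and a stand-in provider is used so the example runs without any vendor API.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Uniform interface that decouples clinical tasks from any one vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    """Stand-in provider so this sketch runs without an API key."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

class TaskRouter:
    """Maps each clinical task to the provider chosen by comparative evaluation."""
    def __init__(self, routes: dict, default: LLMProvider):
        self.routes = routes
        self.default = default

    def run(self, task: str, prompt: str) -> str:
        provider = self.routes.get(task, self.default)
        return provider.complete(prompt)

# Hypothetical routing table: different vendors win on different tasks.
router = TaskRouter(
    routes={
        "discharge_summary": EchoProvider("vendor_a"),
        "chart_qa": EchoProvider("vendor_b"),
    },
    default=EchoProvider("fallback"),
)
print(router.run("chart_qa", "Summarize recent labs."))
```

Because the routing table is data rather than code, swapping a provider for a given task requires no change to the clinical workflow that calls it, which is the lever that keeps negotiating power with the institution.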

Looking forward, this model suggests healthcare institutions will increasingly build internal LLM capabilities rather than purchasing turnkey solutions. The success hinges on continuous monitoring methodologies and value assessment frameworks that capture both cost savings and quality improvements. Institutions replicating this approach will drive market consolidation away from consumer-facing LLM products toward specialized enterprise infrastructure.

Key Takeaways
  • ChatEHR achieved 1,075 active users and 23,000 sessions within three months by eliminating manual data entry friction through direct EHR integration.
  • The platform generated $6M estimated first-year savings through cost reduction, time savings, and revenue growth from improved clinical workflows.
  • Hallucination and accuracy challenges persist despite institutional deployment, requiring new monitoring methods beyond standard LLM benchmarks.
  • Model-agnostic architecture enables institutions to match different LLM providers to specific clinical tasks rather than committing to single vendors.
  • Internal build-from-within strategy demonstrates healthcare organizations can maintain governance control and clinical agency in LLM implementation.