y0news

#healthcare-ai News & Analysis

140 articles tagged with #healthcare-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · crypto.news · 5d ago · 6/10

AI Therapy Chatbots Face Growing State Bans as Maine Advances Bill and Missouri Follows

Maine and Missouri are advancing legislative bans on AI therapy chatbots, reflecting growing state-level regulatory skepticism toward AI-driven mental health services. This trend signals potential restrictions on a developing sector, though the movement remains fragmented across individual states without federal coordination.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10

Large Language Models for Outpatient Referral: Problem Definition, Benchmarking and Challenges

Researchers have developed a comprehensive evaluation framework for Large Language Models applied to outpatient referral systems in healthcare, revealing that LLMs offer limited advantages over simpler BERT-like models in static referral tasks but demonstrate potential in interactive dialogue scenarios. The study addresses the absence of standardized evaluation criteria for assessing LLM effectiveness in dynamic healthcare settings.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

DeepFAN, a transformer-based deep learning model for human-artificial intelligence collaborative assessment of incidental pulmonary nodules in CT scans: a multi-reader, multi-case trial

DeepFAN, a transformer-based AI model, achieved 93.9% diagnostic accuracy for lung nodule classification and improved junior radiologists' performance by 10.9% in a multi-reader, multi-case trial. The model was trained on over 10,000 pathology-confirmed nodules and validated across 400 cases at three medical institutions.

🏢 Meta
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

PLACID: Privacy-preserving Large language models for Acronym Clinical Inference and Disambiguation

Researchers developed PLACID, a privacy-preserving system using small on-device AI models (2B-10B parameters) for clinical acronym disambiguation in healthcare settings. The cascaded approach combines general-purpose models for detection with domain-specific biomedical models, achieving 81% expansion accuracy while keeping sensitive health data local.
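The cascaded design described above can be illustrated with a minimal sketch. This is not the PLACID implementation: the sense inventory, the substring-based detector standing in for the general-purpose model, and the context-overlap scorer standing in for the biomedical model are all illustrative assumptions.

```python
# Toy sketch of a two-stage cascade for clinical acronym disambiguation:
# stage 1 detects candidate acronyms (in PLACID, a general-purpose model);
# stage 2 picks the expansion (in PLACID, a domain-specific biomedical model).
# All data below is hypothetical.

ACRONYM_SENSES = {  # toy on-device sense inventory
    "MS": ["multiple sclerosis", "mitral stenosis"],
    "RA": ["rheumatoid arthritis", "right atrium"],
}

def detect_acronyms(note: str) -> list[str]:
    """Stage 1: flag tokens that match a known acronym."""
    return [tok for tok in note.replace(",", " ").split() if tok in ACRONYM_SENSES]

def disambiguate(acronym: str, note: str) -> str:
    """Stage 2: score each candidate sense by overlap with the note's context."""
    senses = ACRONYM_SENSES[acronym]
    scores = [sum(w in note.lower() for w in s.split()) for s in senses]
    return senses[scores.index(max(scores))]

def expand_note(note: str) -> dict[str, str]:
    """Run the full cascade; sensitive text never leaves the device."""
    return {a: disambiguate(a, note) for a in detect_acronyms(note)}

print(expand_note("Patient with MS , stenosis of the mitral valve noted"))
```

The point of the cascade is that a cheap general detector narrows the work, so the heavier domain model only runs on flagged spans, and everything stays local.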

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Ethical Fairness without Demographics in Human-Centered AI

Researchers introduce Flare, a new AI fairness framework that ensures ethical outcomes without requiring demographic data, addressing privacy and regulatory concerns in human-centered AI applications. The system uses Fisher Information to detect hidden biases and includes a novel evaluation metric suite called BHE for measuring ethical fairness beyond traditional statistical measures.

🏢 Meta
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Concisely Explaining the Doubt: Minimum-Size Abductive Explanations for Linear Models with a Reject Option

Researchers developed a method to compute minimum-size abductive explanations for AI linear models with reject options, addressing a key challenge in explainable AI for critical domains. The approach uses log-linear algorithms for accepted instances and integer linear programming for rejected instances, proving more efficient than existing methods despite theoretical NP-hardness.
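The idea of a minimum-size abductive explanation can be sketched for a linear classifier over bounded features: find the smallest feature subset whose values, when fixed, guarantee the prediction no matter how the remaining features vary. The brute-force subset search below is a didactic stand-in for the paper's integer-linear-programming formulation; the weights and instance are made up.

```python
# Toy sketch (not the paper's algorithm): minimum-size abductive explanation
# for a linear classifier w.x + b > 0 with features bounded in [0, 1].
# Fixing subset S must keep the score positive under the worst-case
# assignment of all free features; ILP replaces this search at scale.
from itertools import combinations

def worst_score(w, b, x, fixed):
    """Lowest achievable score when only features in `fixed` are pinned to x."""
    s = b
    for i, wi in enumerate(w):
        if i in fixed:
            s += wi * x[i]
        else:
            s += min(wi * 0.0, wi * 1.0)  # adversary picks the worst endpoint
    return s

def min_abductive_explanation(w, b, x):
    """Smallest feature subset that alone guarantees the positive prediction."""
    assert sum(wi * xi for wi, xi in zip(w, x)) + b > 0, "instance must be accepted"
    for k in range(len(w) + 1):  # smallest subsets first => minimum size
        for subset in combinations(range(len(w)), k):
            if worst_score(w, b, x, set(subset)) > 0:
                return set(subset)

w, b = [2.0, -1.0, 0.5], -0.4
x = [0.9, 0.1, 0.2]
print(min_abductive_explanation(w, b, x))
```

The exponential subset search makes the NP-hardness tangible; the paper's contribution is doing this efficiently, including the reject-option case the sketch omits.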

AI · Bearish · arXiv – CS AI · Mar 17 · 6/10

HEARTS: Benchmarking LLM Reasoning on Health Time Series

Researchers introduce HEARTS, a comprehensive benchmark for evaluating large language models' ability to reason over health time series data across 16 datasets and 12 health domains. The study reveals that current LLMs significantly underperform compared to specialized models and struggle with multi-step temporal reasoning in healthcare applications.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

EviAgent: Evidence-Driven Agent for Radiology Report Generation

Researchers introduce EviAgent, a new AI system for automated radiology report generation that provides transparent, evidence-driven analysis. The system addresses key limitations of current medical AI models by offering traceable decision-making and integrating external domain knowledge, outperforming existing specialized medical models in testing.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Argumentation for Explainable and Globally Contestable Decision Support with LLMs

Researchers introduce ArgEval, a new framework that enhances Large Language Model decision-making through structured argumentation and global contestability. Unlike previous approaches limited to binary choices and local corrections, ArgEval maps entire decision spaces and builds reusable argumentation frameworks that can be globally modified to prevent repeated mistakes.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

OpenHospital: A Thing-in-itself Arena for Evolving and Benchmarking LLM-based Collective Intelligence

Researchers introduce OpenHospital, a new interactive arena designed to develop and benchmark Large Language Model-based Collective Intelligence through physician-patient agent interactions. The platform uses a data-in-agent-self paradigm to rapidly enhance AI agent capabilities while providing evaluation metrics for medical proficiency and system efficiency.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

PREBA: Surgical Duration Prediction via PCA-Weighted Retrieval-Augmented LLMs and Bayesian Averaging Aggregation

Researchers developed PREBA, a retrieval-augmented framework that uses PCA-weighted retrieval and Bayesian averaging to improve surgical duration prediction accuracy by up to 40% using large language models. The system grounds LLM predictions in institution-specific clinical data without requiring computationally intensive training, achieving performance competitive with supervised machine learning methods.
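The aggregation step named in the title can be sketched as inverse-variance (Bayesian model) averaging: several retrieval-conditioned duration estimates are pooled so that more confident ones carry more weight. This is a generic illustration of the aggregation idea, not PREBA's exact formula, and the numbers are invented.

```python
# Hypothetical sketch of Bayesian averaging aggregation: combine several
# LLM duration predictions (each grounded in different retrieved similar
# cases) by weighting each estimate with the inverse of its variance.

def bayesian_average(predictions):
    """predictions: list of (mean_minutes, variance) pairs -> pooled mean."""
    weights = [1.0 / var for _, var in predictions]
    total = sum(weights)
    return sum(w * mu for (mu, _), w in zip(predictions, weights)) / total

# Three retrieval-conditioned estimates for one surgery (illustrative)
estimates = [(120.0, 100.0), (150.0, 400.0), (110.0, 50.0)]
print(round(bayesian_average(estimates), 1))
```

The design choice is that no training is needed: the pooling happens at inference time over predictions already grounded in institution-specific retrieved cases.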

AI · Bullish · arXiv – CS AI · Mar 16 · 6/10

Delta1 with LLM: symbolic and neural integration for credible and explainable reasoning

Researchers introduce Delta1, a framework that integrates automated theorem generation with large language models to create explainable AI reasoning. The system combines formal logic rigor with natural language explanations, demonstrating applications across healthcare, compliance, and regulatory domains.

AI · Bullish · arXiv – CS AI · Mar 12 · 6/10

Emulating Clinician Cognition via Self-Evolving Deep Clinical Research

Researchers developed DxEvolve, a self-evolving AI diagnostic system that mimics clinical reasoning through interactive workflows and continuous learning. The system achieved 90.4% diagnostic accuracy on benchmarks, comparable to human clinicians at 88.8%, and showed significant improvements over traditional AI models.

AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Investigating Gender Stereotypes in Large Language Models via Social Determinants of Health

A new study finds that Large Language Models (LLMs) propagate gender stereotypes and biases when processing healthcare data, particularly through interactions between gender and social determinants of health. The researchers used French patient records to show how LLMs rely on embedded stereotypes when making gendered decisions in healthcare contexts.

AI · Bullish · MIT News – AI · Mar 9 · 6/10

Improving AI models’ ability to explain their predictions

Researchers have developed a new approach to improve AI models' ability to explain their predictions, which could help users determine whether to trust model outputs. This advancement is particularly important for safety-critical applications such as healthcare and autonomous driving where understanding AI decision-making is crucial.

AI · Bullish · TechCrunch – AI · Mar 5 · 6/10

AWS launches a new AI agent platform specifically for health care

AWS has launched Amazon Connect Health, a new AI agent platform designed specifically for healthcare applications. The platform focuses on automating key healthcare processes including patient scheduling, documentation, and patient verification tasks.

AI · Bullish · arXiv – CS AI · Mar 5 · 5/10

HealthMamba: An Uncertainty-aware Spatiotemporal Graph State Space Model for Effective and Reliable Healthcare Facility Visit Prediction

Researchers have developed HealthMamba, a new AI framework that uses spatiotemporal modeling and uncertainty quantification to predict healthcare facility visits more accurately. The system achieved 6% better prediction accuracy and 3.5% improvement in uncertainty quantification compared to existing methods when tested on real-world datasets from four US states.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Linking Knowledge to Care: Knowledge Graph-Augmented Medical Follow-Up Question Generation

Researchers developed KG-Followup, a knowledge graph-augmented large language model system that generates medical follow-up questions for pre-diagnostic assessment. The system combines structured medical domain knowledge with LLMs to improve clinical diagnosis efficiency, outperforming existing methods by 5-8% in recall benchmarks.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

ProtRLSearch: A Multi-Round Multimodal Protein Search Agent with Large Language Models Trained via Reinforcement Learning

Researchers introduce ProtRLSearch, a multi-round protein search agent that uses reinforcement learning and multimodal inputs (protein sequences and text) to improve protein analysis for healthcare applications. The system addresses limitations of single-round, text-only protein search agents and includes a new benchmark called ProtMCQs with 3,000 multiple choice questions for evaluation.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Multimodal Mixture-of-Experts with Retrieval Augmentation for Protein Active Site Identification

Researchers introduce MERA (Multimodal Mixture-of-Experts with Retrieval Augmentation), a new AI framework for protein active site identification that addresses challenges in drug discovery. The system achieves 90% AUPRC performance on active site prediction through hierarchical multi-expert retrieval and reliability-aware fusion strategies.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

Identifying and Characterising Response in Clinical Trials: Development and Validation of a Machine Learning Approach in Colorectal Cancer

Researchers developed a machine learning approach combining Virtual Twins method with survLIME to identify patient subgroups who respond differently to treatments in clinical trials. The method achieved 0.77 AUC for identifying treatment responders in colorectal cancer trials, finding genetic mutations, metastasis sites, and ethnicity as key response factors.

← Prev · Page 3 of 6 · Next →