y0news

#healthcare-ai News & Analysis

130 articles tagged with #healthcare-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · Wired – AI · 6d ago · 7/10

Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Meta's Muse Spark AI model requests access to users' raw health data including lab results, raising significant privacy concerns while demonstrating poor medical judgment. The system exemplifies how large language models lack the expertise to provide reliable healthcare guidance despite their persuasive presentation.

AI · Neutral · arXiv – CS AI · 6d ago · 7/10

Blending Human and LLM Expertise to Detect Hallucinations and Omissions in Mental Health Chatbot Responses

Researchers demonstrate that standard LLM-as-a-judge methods achieve only 52% accuracy in detecting hallucinations and omissions in mental health chatbots, failing in high-risk healthcare contexts. A hybrid framework combining human domain expertise with machine learning features achieves significantly higher performance (0.717-0.849 F1 scores), suggesting that transparent, interpretable approaches outperform black-box LLM evaluation in safety-critical applications.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

LLMs-Healthcare: Current Applications and Challenges of Large Language Models in various Medical Specialties

A comprehensive research review examines the current applications of Large Language Models (LLMs) across various healthcare specialties including cancer care, dermatology, dental care, neurodegenerative disorders, and mental health. The study highlights LLMs' transformative impact on medical diagnostics and patient care while acknowledging existing challenges and limitations in healthcare integration.

AI · Bearish · arXiv – CS AI · Apr 6 · 7/10

When AI Gets it Wrong: Reliability and Risk in AI-Assisted Medication Decision Systems

A research paper examines reliability issues in AI-assisted medication decision systems, finding that even systems with good aggregate performance can produce dangerous errors in real-world healthcare scenarios. The study emphasizes that single incorrect AI recommendations in medication management can cause severe patient harm, highlighting the need for human oversight and risk-aware evaluation approaches.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

ClinicalReTrial: Clinical Trial Redesign with Self-Evolving Agents

Researchers have developed ClinicalReTrial, a multi-agent AI system that redesigns clinical trial protocols to improve success rates. The system demonstrated an 83.3% improvement rate in trial protocols, with a mean 5.7% increase in success probability at a cost of $0.12 per trial.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

A Decade-Scale Benchmark Evaluating LLMs' Clinical Practice Guidelines Detection and Adherence in Multi-turn Conversations

Researchers introduced CPGBench, a benchmark evaluating how well Large Language Models detect and follow clinical practice guidelines in healthcare conversations. The study found that while LLMs can detect 71-90% of clinical recommendations, they adhere to guidelines only 22-63% of the time, revealing significant gaps for safe medical deployment.

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

Berta: an open-source, modular tool for AI-enabled clinical documentation

Alberta Health Services deployed Berta, an open-source AI scribe platform that reduces clinical documentation costs by 70-95% compared to commercial alternatives. The system was used by 198 emergency physicians across 105 facilities, generating over 22,000 clinical sessions while keeping all data within secure health system infrastructure.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

How Meta-research Can Pave the Road Towards Trustworthy AI In Healthcare: Catalogue of Ideas and Roadmap for Future Research

Researchers convened a February 2025 workshop to explore how meta-research methodologies can enhance Trustworthy AI (TAI) implementation in healthcare. The study identifies key challenges including robustness, reproducibility, clinical integration, and transparency gaps, proposing a roadmap for interdisciplinary collaboration between TAI and meta-research fields.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Democratising Clinical AI through Dataset Condensation for Classical Clinical Models

Researchers have developed a new framework that enables dataset condensation for non-differentiable clinical AI models like decision trees and Cox regression, using differential privacy to create synthetic medical datasets. This breakthrough allows healthcare institutions to share condensed synthetic data while preserving patient privacy and maintaining model utility across classification and survival prediction tasks.

AI · Bullish · TechCrunch – AI · Mar 10 · 7/10

Amazon launches its healthcare AI assistant on its website and app

Amazon has launched a healthcare AI assistant on its website and mobile app that can answer health questions, explain medical records, manage prescription renewals, and book appointments. This represents Amazon's significant expansion into AI-powered healthcare services, potentially disrupting traditional healthcare delivery models.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

AI End-to-End Radiation Treatment Planning Under One Second

Researchers developed AIRT, an AI-powered radiation therapy planning system that generates complete prostate cancer treatment plans in under one second using deep learning. The system processes CT scans and anatomical data to produce clinically-viable radiation treatment plans 100x faster than current methods, demonstrating non-inferiority to existing commercial solutions.

🏢 Nvidia
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Bridging the Reproducibility Divide: Open Source Software's Role in Standardizing Healthcare AI

A study reveals that 74% of healthcare AI research papers still use private datasets or don't share code, creating reproducibility issues that undermine trust in medical AI applications. Papers that embrace open practices by sharing both public datasets and code receive 110% more citations on average, demonstrating clear benefits for scientific impact.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

Researchers propose a new goal-driven risk assessment framework for LLM-powered systems, specifically targeting healthcare applications. The approach uses attack trees to identify detailed threat vectors combining adversarial AI attacks with conventional cyber threats, addressing security gaps in LLM system design.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PulseLM: A Foundation Dataset and Benchmark for PPG-Text Learning

Researchers introduced PulseLM, a large-scale dataset combining PPG cardiovascular sensor data with natural language processing for multimodal AI models. The dataset contains 1.31 million PPG segments with 3.15 million question-answer pairs, designed to enable language-based physiological reasoning in healthcare AI applications.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

3D Wavelet-Based Structural Priors for Controlled Diffusion in Whole-Body Low-Dose PET Denoising

Researchers developed WCC-Net, a 3D wavelet-based diffusion model that significantly improves low-dose PET imaging denoising while reducing patient radiation exposure. The AI framework uses frequency-domain structural priors to maintain anatomical accuracy and outperforms existing CNN, GAN, and diffusion baselines across multiple dose levels.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

DMD-augmented Unpaired Neural Schrödinger Bridge for Ultra-Low Field MRI Enhancement

Researchers developed a new AI framework using Unpaired Neural Schrödinger Bridge to enhance ultra-low field MRI scans (64 mT) to match the quality of high-field 3T MRI scans. The method combines diffusion-guided distribution matching with anatomical structure preservation to improve medical imaging accessibility while maintaining diagnostic quality.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

From Privacy to Trust in the Agentic Era: A Taxonomy of Challenges in Trustworthy Federated Learning Through the Lens of Trust Report 2.0

Researchers propose a Trustworthy Federated Learning (TFL) framework that treats trust as a continuously maintained system condition rather than a static property, addressing challenges in AI systems with autonomous decision-making. The framework introduces Trust Report 2.0 as a privacy-preserving coordination blueprint for multi-stakeholder governance in federated learning deployments.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

MPFlow: Multi-modal Posterior-Guided Flow Matching for Zero-Shot MRI Reconstruction

Researchers developed MPFlow, a new zero-shot MRI reconstruction framework that uses multi-modal data and rectified flow to improve medical imaging quality. The system reduces tumor hallucinations by 15% while using 80% fewer sampling steps compared to existing diffusion methods, potentially advancing AI applications in medical diagnostics.

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

SycoEval-EM: Sycophancy Evaluation of Large Language Models in Simulated Clinical Encounters for Emergency Care

Researchers developed SycoEval-EM, a framework testing how large language models resist patient pressure for inappropriate medical care in emergency settings. Testing 20 LLMs across 1,875 encounters revealed acquiescence rates of 0-100%, with models more vulnerable to imaging requests than opioid prescriptions, highlighting the need for adversarial testing in clinical AI certification.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Volumetric Directional Diffusion: Anchoring Uncertainty Quantification in Anatomical Consensus for Ambiguous Medical Image Segmentation

Researchers propose Volumetric Directional Diffusion (VDD), a new AI method for medical image segmentation that addresses uncertainty in 3D lesion analysis. VDD anchors generative models to consensus priors to maintain anatomical accuracy while capturing expert disagreements, achieving state-of-the-art uncertainty quantification on multiple medical datasets.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Odin: Multi-Signal Graph Intelligence for Autonomous Discovery in Knowledge Graphs

Researchers present Odin, the first production-deployed graph intelligence engine that autonomously discovers patterns in knowledge graphs without predefined queries. The system uses a novel COMPASS scoring metric combining structural, semantic, temporal, and community-aware signals, and has been successfully deployed in regulated healthcare and insurance environments.

AI · Bearish · arXiv – CS AI · Mar 4 · 7/10

Silent Sabotage During Fine-Tuning: Few-Shot Rationale Poisoning of Compact Medical LLMs

Researchers discovered a new stealth poisoning attack method targeting medical AI language models during fine-tuning that degrades performance on specific medical topics without detection. The attack injects poisoned rationales into training data, proving more effective than traditional backdoor attacks or catastrophic forgetting methods.

Page 1 of 6