y0news
AnalyticsDigestsRSSAICrypto
#poisoning-attacks1 article
1 articles
AIBearisharXiv โ€“ CS AI ยท 5h ago1
๐Ÿง 

Silent Sabotage During Fine-Tuning: Few-Shot Rationale Poisoning of Compact Medical LLMs

Researchers discovered a new stealth poisoning attack method targeting medical AI language models during fine-tuning that degrades performance on specific medical topics without detection. The attack injects poisoned rationales into training data, proving more effective than traditional backdoor attacks or catastrophic forgetting methods.