y0news
#model-training · 4 articles
AIBullish · arXiv – CS AI · 4h ago
🧠

FedRot-LoRA: Mitigating Rotational Misalignment in Federated LoRA

Researchers propose FedRot-LoRA, a framework that mitigates rotational misalignment in federated LoRA fine-tuning of large language models. It applies orthogonal transformations to align client updates before aggregation, improving training stability and performance without increasing communication costs.
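The rotational ambiguity behind this comes from LoRA's factorization ΔW = BA: for any orthogonal R, (BRᵀ)(RA) yields the same update, so different clients can converge to differently rotated factors that average poorly. A minimal sketch of the alignment idea, using an orthogonal Procrustes fit to a reference (an illustrative assumption; the paper's exact alignment procedure may differ):

```python
import numpy as np

def procrustes_rotation(A_client, A_ref):
    # Orthogonal Procrustes: find orthogonal R minimizing ||R @ A_client - A_ref||_F
    M = A_ref @ A_client.T
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def aggregate_lora(clients, A_ref):
    # clients: list of (B, A) LoRA factors, B: (d, r), A: (r, k)
    aligned = []
    for B, A in clients:
        R = procrustes_rotation(A, A_ref)  # r x r orthogonal rotation
        # (B R^T)(R A) = B A, so each client's update is unchanged
        aligned.append((B @ R.T, R @ A))
    B_avg = np.mean([B for B, _ in aligned], axis=0)
    A_avg = np.mean([A for _, A in aligned], axis=0)
    return B_avg, A_avg
```

After alignment, plain FedAvg on the factors behaves sensibly, because clients whose factors differ only by a rotation no longer cancel each other out when averaged.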

AIBullish · arXiv – CS AI · 4h ago
🧠

Controllable Reasoning Models Are Private Thinkers

Researchers developed a method to train AI reasoning models to follow privacy instructions in their internal reasoning traces, not just final answers. The approach uses separate LoRA adapters and achieves up to 51.9% improvement on privacy benchmarks, though with some trade-offs in task performance.

AIBullish · arXiv – CS AI · 4h ago
🧠

Scaling Generalist Data-Analytic Agents

Researchers introduce DataMind, a new training framework for building open-source data-analytic AI agents that can handle complex, multi-step data analysis tasks. The DataMind-14B model achieves state-of-the-art performance with a 71.16% average score, outperforming proprietary models such as DeepSeek-V3.1 and GPT-5 on data analysis benchmarks.

AIBullish · arXiv – CS AI · 4h ago
🧠

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

Researchers propose DUET, a new distillation-based method for LLM unlearning that removes undesirable knowledge from AI models without full retraining. The technique combines computational efficiency with security advantages, achieving better performance in both knowledge removal and utility preservation while being significantly more data-efficient than existing methods.
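The "distillation-based" framing can be illustrated generically: the student matches the ordinary teacher on retain data, while on forget data it matches a teacher whose prompt context suppresses the target knowledge. A toy numpy sketch of such a loss (the function names and weighting are illustrative assumptions, not DUET's actual objective):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) per example over the last axis
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def unlearning_distill_loss(student_logits, teacher_logits, is_forget, alpha=1.0):
    """Generic distillation-unlearning loss (toy sketch, not DUET's objective).

    For retain examples, teacher_logits come from the unmodified teacher;
    for forget examples, the caller passes logits from a teacher whose
    context suppresses the knowledge to be removed. alpha up-weights the
    forget examples.
    """
    p_teacher = softmax(teacher_logits)
    p_student = softmax(student_logits)
    per_example = kl(p_teacher, p_student)
    weights = np.where(is_forget, alpha, 1.0)
    return float(np.mean(weights * per_example))
```

Minimizing this on a mixed retain/forget batch distills the suppressed behavior into the student without full retraining, which is the general efficiency argument the summary makes.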