y0news
#llm-training
2 articles
AIBullish · arXiv – CS AI · 6h ago · 10
🧠

RE-PO: Robust Enhanced Policy Optimization as a General Framework for LLM Alignment

Researchers introduce RE-PO (Robust Enhanced Policy Optimization), a new framework that addresses noise in the human preference data used to train large language models. The method uses expectation-maximization to identify unreliable labels and reweight the training data, improving the performance of alignment algorithms by up to 7% on benchmarks.
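The summary's core idea (an EM loop that estimates which preference labels are unreliable and down-weights them) can be sketched as follows. This is an illustrative assumption about how such a scheme might look, not RE-PO's actual update rules; `em_reweight`, the noise-rate parameter `eps`, and the posterior formulas are all hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def em_reweight(margins, eps=0.2, n_iters=10):
    """Hypothetical EM-style reweighting of noisy preference labels.

    margins: reward-model margin score(chosen) - score(rejected) per pair.
    eps: current estimate of the label-flip (noise) rate.
    Returns per-example weights = posterior P(label is clean).
    """
    post = []
    for _ in range(n_iters):
        # E-step: posterior that each label is clean, given the model's
        # margin and the current noise-rate estimate.
        post = []
        for m in margins:
            p_clean = (1 - eps) * sigmoid(m)   # clean label, model agrees
            p_noisy = eps * sigmoid(-m)        # flipped label
            post.append(p_clean / (p_clean + p_noisy))
        # M-step: re-estimate the noise rate from the posteriors.
        eps = 1 - sum(post) / len(post)
        eps = min(max(eps, 1e-6), 1 - 1e-6)
    return post

weights = em_reweight([2.0, 1.5, -1.8, 0.3])
# The pair the reward model strongly disagrees with (margin -1.8) ends up
# down-weighted relative to the high-margin pairs.
```

The resulting weights would then scale each pair's contribution to the alignment loss, so confidently mislabeled comparisons contribute less to the gradient.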

$LINK
AIBullish · arXiv – CS AI · 6h ago · 7
🧠

Smoothing DiLoCo with Primal Averaging for Faster Training of LLMs

Researchers propose Generalized Primal Averaging (GPA), a new optimization method that improves training speed for large language models by 8-10% over standard AdamW while using less memory. GPA unifies and extends existing averaging-based optimizers such as DiLoCo by enabling smooth iterate averaging at every step, without a complex two-loop structure.
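The single-loop iterate averaging the summary describes can be sketched as below: the averaged parameters are updated at every optimizer step inside the same loop, rather than in a separate outer loop. The objective, step size, and exponential-averaging coefficient are illustrative assumptions for this sketch, not GPA's actual algorithm.

```python
def sgd_with_averaging(grad, x0, lr=0.1, beta=0.9, steps=100):
    """Plain gradient descent that also maintains an exponential moving
    average of the iterates, updated in the same (single) loop."""
    x = x0
    x_avg = x0
    for _ in range(steps):
        x = x - lr * grad(x)                   # fast "raw" iterate
        x_avg = beta * x_avg + (1 - beta) * x  # smoothed averaged iterate
    return x, x_avg

# Minimize f(x) = (x - 3)^2; both the raw and the averaged iterates
# approach the minimizer x = 3.
x, x_avg = sgd_with_averaging(lambda x: 2 * (x - 3), x0=0.0)
```

The appeal of this pattern is that the averaged copy costs one extra buffer and one fused update per step, avoiding the inner/outer synchronization of two-loop averaging schemes.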