y0news
#compute-efficiency · 3 articles
AI · Bullish · arXiv – CS AI · 5d ago · 7/10 · 🧠 4

Scaling with Collapse: Efficient and Predictable Training of LLM Families

Researchers demonstrate that training loss curves for large language models can collapse onto universal trajectories when hyperparameters are optimally set, enabling more efficient LLM training. They introduce Celerity, a competitive LLM family developed using these insights, and show that deviation from collapse can serve as an early diagnostic for training issues.
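
A rough illustration of the diagnostic idea: if rescaled loss curves from well-configured runs are expected to overlay onto one shared trajectory, a run whose rescaled curve drifts away from that trajectory can be flagged early. The normalization and the reference curve below are assumptions made for this sketch, not the method from the paper.

```python
# Minimal sketch (not the paper's actual method): rescale loss curves from
# different runs and measure how far each one drifts from a shared reference
# trajectory; a large gap flags a possible training issue.
import numpy as np

def normalized_curve(steps, losses, final_loss_estimate):
    """Rescale a loss curve: x = fraction of training completed,
    y = excess loss over the run's estimated final loss (illustrative choices)."""
    x = np.asarray(steps) / steps[-1]
    y = np.asarray(losses) - final_loss_estimate
    return x, y

def collapse_deviation(x_ref, y_ref, x_run, y_run):
    """Mean absolute gap between a run's rescaled curve and the reference
    trajectory, interpolated onto a common grid."""
    grid = np.linspace(0.05, 1.0, 50)
    ref = np.interp(grid, x_ref, y_ref)
    run = np.interp(grid, x_run, y_run)
    return float(np.mean(np.abs(run - ref)))

# Toy usage: a healthy run tracks the reference; a misconfigured run does not.
steps = np.arange(1, 1001)
reference = 2.0 + 1.5 * steps ** -0.3                    # idealized reference curve
healthy   = reference + np.random.normal(0, 0.01, steps.size)
diverging = 2.0 + 1.5 * steps ** -0.15                   # loss decays too slowly

x_ref, y_ref = normalized_curve(steps, reference, 2.0)
for name, run in [("healthy", healthy), ("diverging", diverging)]:
    x, y = normalized_curve(steps, run, 2.0)
    print(name, "deviation:", round(collapse_deviation(x_ref, y_ref, x, y), 3))
```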

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 🧠 5

Compute-Optimal Quantization-Aware Training

Researchers developed a new approach to quantization-aware training (QAT) that optimizes compute allocation between full-precision and quantized training phases. They discovered that contrary to previous findings, the optimal ratio of QAT to full-precision training increases with total compute budget, and derived scaling laws to predict optimal configurations across different model sizes and bit widths.
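
The core finding can be sketched as a budget-allocation rule: the share of compute spent in the QAT phase grows as the total budget grows. The logarithmic form and the constants below are placeholders chosen for illustration, not the paper's fitted scaling law.

```python
# Illustrative sketch only: split a training-compute budget between a
# full-precision phase and a quantization-aware training (QAT) phase, with the
# QAT fraction rising as the total budget rises.
import math

def qat_fraction(total_flops, a=-0.5, b=0.05):
    """Hypothetical rule: the QAT share of compute grows slowly
    (logarithmically here) with the total budget, clipped to [0.1, 0.9]."""
    frac = a + b * math.log10(total_flops)
    return min(0.9, max(0.1, frac))

def split_budget(total_flops):
    f = qat_fraction(total_flops)
    return {"full_precision_flops": (1 - f) * total_flops,
            "qat_flops": f * total_flops,
            "qat_fraction": f}

for budget in (1e20, 1e21, 1e22, 1e23):
    plan = split_budget(budget)
    print(f"budget={budget:.0e}  QAT fraction={plan['qat_fraction']:.2f}")
```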

AI · Bullish · OpenAI News · May 5 · 7/10 · 🧠 4

AI and efficiency

A new analysis finds that the compute required to train a neural network to AlexNet-level ImageNet classification performance has halved every 16 months since 2012. Reaching that performance now takes 44 times less compute than it did in 2012, far outpacing Moore's Law, which would yield only an 11x cost reduction over the same period.
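
The headline numbers are easy to sanity-check: halving every 16 months compounds to roughly 40x over a span of about seven years, while a 24-month doubling period (Moore's Law) compounds to about 11x. The seven-year span below is an assumption for illustration; the 44x figure in the article comes from measured results.

```python
# Back-of-the-envelope check of the reported trend.
def improvement_factor(months_elapsed, months_per_doubling):
    """How much cheaper something gets if its cost halves once every
    `months_per_doubling` months."""
    return 2 ** (months_elapsed / months_per_doubling)

span = 7 * 12  # roughly 2012 -> 2019, in months (assumed for illustration)
algorithmic = improvement_factor(span, 16)  # cost halves every 16 months
moores_law  = improvement_factor(span, 24)  # cost halves every ~24 months

print(f"algorithmic efficiency gain over ~7 years: {algorithmic:.0f}x")  # ~38x
print(f"Moore's-Law-style gain over the same span: {moores_law:.0f}x")   # ~11x
```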