arXiv – CS AI · 4h ago
MobileLLM-R1: Exploring the Limits of Sub-Billion Language Model Reasoners with Open Training Recipes

Researchers developed MobileLLM-R1, a sub-billion-parameter language model that demonstrates strong reasoning capabilities using only 2T tokens of high-quality data rather than the 10T+ token datasets typical of comparable models. The 950M-parameter model outperforms larger competitors on reasoning benchmarks while using only 11.7% of the training tokens of models such as Qwen3, whose training data is largely proprietary.