
OSF: On Pre-training and Scaling of Sleep Foundation Models

arXiv – CS AI | Zitao Shuai, Zongzhe Xu, David Yang, Wei Wang, Yuzhe Yang
AI Summary

Researchers developed OSF, a family of sleep foundation models pre-trained on 166,500 hours of sleep recordings drawn from nine public sources. The study examines how pre-training choices and scaling along several axes affect sleep AI models, and the resulting OSF models achieve state-of-the-art performance across nine datasets on sleep and disease prediction tasks.

Key Takeaways
  • OSF models were trained on a massive dataset of 166,500 hours of sleep recordings from nine public sources.
  • Existing foundation models fail to generalize when certain data channels are missing during inference.
  • Channel-invariant feature learning is essential for effective pre-training of sleep models; a minimal sketch of this idea follows the list.
  • Scaling sample size, model capacity, and multi-source data mixture consistently improves downstream performance.
  • OSF achieves state-of-the-art results across nine datasets for sleep and disease prediction tasks.
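The channel-invariance point is the mechanistic one here: sleep recordings from different sources expose different montages (varying EEG/EOG/EMG channels), so pre-training should prevent the encoder from depending on any fixed channel set. Below is a minimal PyTorch sketch of one common way to encourage this, random whole-channel dropout during pre-training. The summary does not describe OSF's actual method, so all names and sizes here (ChannelDropout, the layer shapes) are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ChannelDropout(nn.Module):
    """Zero out whole input channels at random during pre-training so the
    encoder cannot rely on any single channel being present at inference
    (hypothetical illustration; not the OSF implementation)."""

    def __init__(self, drop_prob: float = 0.3):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time), e.g. multi-channel EEG/EOG/EMG.
        if not self.training:
            return x  # at inference, pass through whatever channels exist
        # Per sample, keep each channel with probability 1 - drop_prob.
        keep = (torch.rand(x.size(0), x.size(1), 1, device=x.device)
                > self.drop_prob).to(x.dtype)
        # Ensure at least one channel survives in every sample.
        all_dropped = keep.sum(dim=1, keepdim=True) == 0
        keep = torch.where(all_dropped, torch.ones_like(keep), keep)
        return x * keep

# Usage sketch: prepend to any sequence encoder so that downstream
# fine-tuning and inference tolerate missing channels.
encoder = nn.Sequential(
    ChannelDropout(drop_prob=0.3),
    nn.Conv1d(in_channels=4, out_channels=64, kernel_size=7, padding=3),
)
clips = torch.randn(8, 4, 3000)   # 8 clips, 4 channels, 30 s at 100 Hz
features = encoder(clips)         # shape: (8, 64, 3000)
```

At inference time, truly absent channels can be zero-filled to match the training-time dropout distribution, which is one plausible reading of why channel-invariant features help when channels are missing.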