🧠 AI · 🟢 Bullish · Importance: 6/10

Latent Speech-Text Transformer

arXiv – CS AI | Yen-Ju Lu, Yashesh Gaur, Wei Zhou, Benjamin Muller, Jesus Villalba, Najim Dehak, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Srinivasan Iyer, Duc Le
🤖 AI Summary

Facebook Research introduces the Latent Speech-Text Transformer (LST), which aggregates low-level speech tokens into higher-level latent patches to improve computational efficiency and cross-modal alignment. In compute-controlled training, the model achieves up to a +6.5% absolute gain on the speech version of the HellaSwag benchmark while maintaining text performance and reducing inference costs for ASR and TTS tasks.
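To make the mechanism concrete, here is a minimal sketch of token-to-patch aggregation. Everything below is illustrative: the module name SpeechPatchAggregator, the patch size of 4, and the linear-projection pooling are assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class SpeechPatchAggregator(nn.Module):
    """Toy sketch: compress groups of consecutive speech-token embeddings
    into single latent patch vectors, shortening the sequence the
    autoregressive transformer must attend over."""

    def __init__(self, d_model: int, patch_size: int = 4):
        super().__init__()
        self.patch_size = patch_size
        # Learned projection over each concatenated group of tokens; the
        # actual model may aggregate differently (this is an assumption).
        self.proj = nn.Linear(patch_size * d_model, d_model)

    def forward(self, speech_embeds: torch.Tensor) -> torch.Tensor:
        # speech_embeds: (batch, seq_len, d_model)
        b, t, d = speech_embeds.shape
        pad = (-t) % self.patch_size
        if pad:  # pad the time axis so it divides evenly into patches
            speech_embeds = nn.functional.pad(speech_embeds, (0, 0, 0, pad))
            t += pad
        # Group every `patch_size` tokens and project each group to a patch.
        groups = speech_embeds.view(b, t // self.patch_size, self.patch_size * d)
        return self.proj(groups)  # (batch, t // patch_size, d_model)


# Example: 750 speech tokens become 188 patches with patch_size=4.
x = torch.randn(2, 750, 512)
patches = SpeechPatchAggregator(d_model=512, patch_size=4)(x)
print(patches.shape)  # torch.Size([2, 188, 512])
```

Grouping K tokens per patch shrinks the sequence the decoder processes by roughly a factor of K, which is where both the training-efficiency and inference-cost gains the summary describes come from.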

Key Takeaways
  • LST addresses the computational inefficiency of auto-regressive speech-text models by aggregating speech tokens into latent patches.
  • The model achieves up to +6.5% absolute gain on speech HellaSwag benchmarks in compute-controlled training settings.
  • Performance gains scale with model size from 420M to 1.8B parameters and persist up to 7B parameters.
  • LST reduces the effective autoregressive sequence length during ASR and TTS inference without degrading reconstruction quality (see the back-of-envelope sketch after this list).
  • The approach improves cross-modal knowledge transfer between speech and text modalities while maintaining text performance.
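To see why a shorter autoregressive sequence matters at inference time, a quick back-of-envelope calculation; the 25 Hz token rate and patch size of 4 are assumed for illustration, not the paper's reported settings.

```python
# Illustrative arithmetic only; token rate and patch size are assumptions.
token_rate_hz = 25      # speech tokens per second of audio (assumed)
utterance_s = 30        # length of the utterance in seconds
patch_size = 4          # speech tokens aggregated per latent patch (assumed)

tokens = token_rate_hz * utterance_s      # 750 autoregressive steps on raw tokens
patches = -(-tokens // patch_size)        # ceiling division -> 188 steps on patches

# Self-attention cost grows roughly quadratically with sequence length, so
# decoding over patches is about (tokens / patches)**2 ~ 16x cheaper to attend over.
print(tokens, patches, round((tokens / patches) ** 2, 1))  # 750 188 15.9
```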