19 articles tagged with #text-to-speech. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠Researchers have released WAXAL, a large-scale multilingual speech dataset covering 24 Sub-Saharan African languages representing over 100 million speakers. The dataset includes 1,250 hours of transcribed speech for ASR and 235 hours of high-quality recordings for TTS, released under CC-BY-4.0 license to advance inclusive AI technologies.
AI · Bullish · OpenAI News · Sep 22 · 7/10
🧠SchoolAI has deployed AI infrastructure powered by OpenAI's GPT-4.1, image generation, and text-to-speech technology to serve 1 million classrooms globally. The platform focuses on providing safe, teacher-supervised AI tools that enhance student engagement and enable personalized learning experiences.
AI · Bullish · Crypto Briefing · 4d ago · 7/10
🧠ElevenLabs is advancing AI audio models that use neural networks to synthesize human-like speech, with implications for transforming business communication. The technology focuses on replicating natural speech patterns through sophisticated text-to-speech models, positioning the company at the forefront of conversational AI applications.
AI · Bullish · arXiv – CS AI · 6d ago · 6/10
🧠Researchers introduce WAND, a framework that reduces computational and memory costs of autoregressive text-to-speech models by replacing full self-attention with windowed attention combined with knowledge distillation. The approach achieves up to 66.2% KV cache memory reduction while maintaining speech quality, addressing a critical scalability bottleneck in modern AR-TTS systems.
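The core memory saving behind windowed attention can be sketched in a few lines. This is an illustrative NumPy toy, not the paper's WAND implementation (which also adds knowledge distillation): keeping only the last `window` key/value pairs caps the KV cache at O(window) instead of O(sequence length) during autoregressive decoding.

```python
from collections import deque
import numpy as np

def windowed_attention_step(q, k_cache, v_cache):
    """One autoregressive decode step attending only to the cached window."""
    k = np.stack(k_cache)                    # (<=window, d)
    v = np.stack(v_cache)
    scores = k @ q / np.sqrt(q.shape[-1])    # (<=window,)
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ v                       # (d,)

d, window, steps = 16, 64, 200
rng = np.random.default_rng(0)
# deque(maxlen=window) evicts the oldest entries, so KV memory is
# bounded by the window size rather than growing with every step
k_cache, v_cache = deque(maxlen=window), deque(maxlen=window)
for _ in range(steps):
    k_cache.append(rng.normal(size=d))
    v_cache.append(rng.normal(size=d))
    out = windowed_attention_step(rng.normal(size=d), k_cache, v_cache)

print(len(k_cache), out.shape)  # 64 (16,)
```

After 200 decode steps the cache still holds only 64 entries; a full-attention cache would hold 200, which is where the reported KV-memory reduction comes from.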
AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠Researchers investigate in-context learning (ICL) in speech language models, revealing that speaking rate significantly affects model performance and acoustic mimicry, while induction heads play a causal role identical to the one they play in text-based ICL. The study bridges the gap between text and speech domains by analyzing how models learn from demonstrations in text-to-speech tasks.
AI · Bullish · arXiv – CS AI · Mar 27 · 6/10
🧠Voxtral TTS is a new multilingual text-to-speech AI model that can generate natural speech from just 3 seconds of reference audio. In human evaluations, it achieved a 68.4% win rate over ElevenLabs Flash v2.5 for voice cloning, demonstrating superior naturalness and expressivity.
AI · Bullish · MarkTechPost · Mar 17 · 6/10
🧠Google AI has released WAXAL, an open multilingual speech dataset covering 24 African languages to improve Automatic Speech Recognition and Text-to-Speech systems. This addresses the significant data distribution problem where African languages remain poorly represented in speech technology training corpora.
🏢 Google
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠Researchers introduce SyncSpeech, a new text-to-speech model that combines autoregressive and non-autoregressive approaches using a Temporal Mask Transformer architecture. The model achieves 5.8x lower first-packet latency and 8.8x improved real-time performance while maintaining comparable speech quality to existing models.
AI · Bullish · arXiv – CS AI · Mar 12 · 6/10
🧠Research demonstrates that LoRA fine-tuning of large language models significantly improves text-to-speech systems, achieving up to 0.42 DNS-MOS gains and 34% SNR improvements when training data has sufficient acoustic diversity. The study establishes LoRA as an effective mechanism for speaker adaptation in compact LLM-based TTS systems, outperforming frozen base models across perceptual quality, speaker fidelity, and signal quality metrics.
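The LoRA mechanism referenced here is easy to see in miniature. Below is an illustrative sketch (not the study's code, and the dimensions and rank are arbitrary): the frozen base weight `W` is left untouched, and only two low-rank factors are trained, which is why LoRA makes compact speaker adaptation cheap.

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA layer: y = x @ (W + A @ B * scale).

    W is frozen; only the low-rank factors A (d_in x r) and
    B (r x d_out) are trained. B is zero-initialised, so the layer
    starts out identical to the frozen base model.
    """
    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_in, d_out))          # frozen base weight
        self.A = rng.normal(scale=0.01, size=(d_in, r))  # trainable
        self.B = np.zeros((r, d_out))                    # trainable, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        return x @ self.W + (x @ self.A) @ self.B * self.scale

layer = LoRALinear(512, 512, r=8)
full = 512 * 512              # parameters a full fine-tune would update
lora = 512 * 8 + 8 * 512      # parameters LoRA actually trains
print(lora / full)            # 0.03125 -> ~3% of the weights
```

Because `B` starts at zero, the adapted model's output exactly matches the frozen base before training begins, and speaker adaptation only perturbs the low-rank path.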
AI · Bullish · MarkTechPost · Mar 11 · 6/10
🧠Fish Audio has released S2-Pro, a flagship Large Audio Model (LAM) that enables high-fidelity, multi-speaker text-to-speech synthesis with sub-150ms latency. The system features zero-shot voice cloning capabilities and granular emotion control, representing a shift from traditional modular TTS pipelines to integrated audio models.
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠Researchers introduce StreamWise, a system for real-time multi-modal content generation that can produce 10-minute podcast videos with sub-second startup delays. The system dynamically manages quality and resources across LLMs, text-to-speech, and video generation, costing under $25 for basic generation or $45 for high-quality real-time streaming.
AI · Neutral · Apple Machine Learning · Feb 25 · 6/10
🧠Research identifies a significant performance gap between speech-adapted Large Language Models and their text-based counterparts on language understanding tasks. Current approaches to bridge this gap rely on expensive large-scale speech synthesis methods, highlighting a key challenge in extending LLM capabilities to audio inputs.
AI · Bullish · OpenAI News · Mar 20 · 6/10
🧠Developers can now access next-generation audio models through an API that includes advanced text-to-speech capabilities. The new models allow for instructional voice customization, enabling developers to specify speaking styles like 'sympathetic customer service agent' for enhanced voice agent applications.
AI · Neutral · OpenAI News · Jun 7 · 5/10
🧠OpenAI provides technical insights into Voice Engine, their text-to-speech model technology, along with details about their safety research approach. The article explores the underlying technology and safety considerations for their voice synthesis capabilities.
AI · Neutral · arXiv – CS AI · Apr 6 · 4/10
🧠Researchers developed a two-stage prompt selection strategy for zero-shot text-to-speech synthesis that improves emotional intensity and speaker consistency. The method evaluates prompts using prosodic features, audio quality, and text-emotion coherence in a static stage, then uses textual similarity for dynamic prompt selection during synthesis.
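The two-stage selection logic described above can be sketched as a filter-then-rank pipeline. Everything below is hypothetical (field names, weights, the 0.5 threshold, and token-overlap similarity are stand-ins, not the paper's actual features or metric), but it shows the shape of the idea: a static shortlist on acoustic/emotional quality, then a dynamic pick by textual similarity to the sentence being synthesised.

```python
def select_prompt(candidates, target_text, threshold=0.5):
    """Stage 1 (static): shortlist prompts by a weighted score over
    prosody, audio quality, and text-emotion coherence.
    Stage 2 (dynamic): return the shortlisted prompt whose transcript
    best matches the target text (crude Jaccard token overlap here)."""
    shortlist = [
        c for c in candidates
        if 0.4 * c["prosody"] + 0.3 * c["quality"] + 0.3 * c["coherence"] >= threshold
    ]
    def similarity(c):
        a = set(c["text"].lower().split())
        b = set(target_text.lower().split())
        return len(a & b) / max(len(a | b), 1)
    return max(shortlist, key=similarity)

prompts = [
    {"text": "what a wonderful surprise", "prosody": 0.9, "quality": 0.8, "coherence": 0.9},
    {"text": "i am so angry right now",   "prosody": 0.7, "quality": 0.9, "coherence": 0.8},
    {"text": "noisy recording sample",    "prosody": 0.2, "quality": 0.1, "coherence": 0.3},
]
best = select_prompt(prompts, "i am so happy right now")
print(best["text"])  # i am so angry right now
```

The low-quality third prompt never reaches stage 2, and among the survivors the dynamic stage favours the transcript sharing the most tokens with the target sentence.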
AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠Researchers introduce NV-Bench, the first standardized benchmark for evaluating nonverbal vocalizations in text-to-speech systems. The benchmark includes 1,651 multilingual utterances across 14 categories and proposes new evaluation metrics that show strong correlation with human perception.
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠Researchers propose ZeSTA, a domain-conditioned training framework that improves personalized speech synthesis by better integrating synthetic and real speech data. The method addresses speaker similarity degradation issues when using zero-shot text-to-speech augmentation with limited real recordings.
AI · Neutral · Hugging Face Blog · Feb 27 · 5/10
🧠TTS Arena introduces a new benchmarking platform for evaluating text-to-speech models through community-driven comparisons in real-world scenarios. The platform aims to provide standardized evaluation metrics for TTS quality assessment across different models and use cases.
AI · Neutral · Hugging Face Blog · Aug 9 · 3/10
🧠The article appears to cover optimizing Bark, a transformer-based text-to-speech model, using the Hugging Face Transformers library. The article body is empty, however, so no specific optimization techniques or results can be summarized.