y0news

#generative-ai News & Analysis

223 articles tagged with #generative-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5

BetterScene: 3D Scene Synthesis with Representation-Aligned Generative Model

BetterScene is a new AI approach that enhances 3D scene synthesis and novel view generation from sparse photos by leveraging Stable Video Diffusion with improved regularization techniques. The method integrates 3D Gaussian Splatting and addresses consistency issues in existing diffusion-based solutions through temporal equivariance and vision foundation model alignment.

AI · Bullish · TechCrunch – AI · Feb 26 · 6/10 · 3

Google launches Nano Banana 2 model with faster image generation

Google has launched Nano Banana 2, a new AI model featuring faster image generation capabilities. The model is being integrated as the default in Google's Gemini app and AI mode, representing a significant update to Google's AI infrastructure.

AI · Bullish · MIT News – AI · Feb 25 · 5/10 · 6

Mixing generative AI with physics to create personal items that work in the real world

Researchers have developed PhysiOpt, a system that combines generative AI with physics simulations to create 3D blueprints for real-world accessories and decor items. The system enhances AI-generated designs by running physics simulations and making subtle adjustments to ensure the items are durable and functional in practical applications.

AI · Bullish · Google DeepMind Blog · Feb 18 · 6/10 · 6

A new way to express yourself: Gemini can now create music

Google's Gemini app has integrated Lyria 3, its most advanced music generation model, allowing users to create 30-second music tracks from text or image inputs. This feature democratizes music creation by making AI-powered composition accessible to anyone through the Gemini interface.

AI · Neutral · Google Research Blog · Jan 27 · 6/10 · 5

ATLAS: Practical scaling laws for multilingual models

ATLAS presents scaling laws for multilingual generative AI models, offering a practical framework for predicting how performance scales across languages and model sizes. The results give practitioners concrete guidance for sizing and planning multilingual training and deployment.
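
The general approach behind such scaling-law studies can be illustrated with a toy fit. The sketch below fits a generic power-law curve L(N) = a·N^(-b) to synthetic loss-versus-size points in log space; the functional form, constants, and data are illustrative assumptions, not ATLAS's actual results.

```python
import numpy as np

def fit_power_law(n, loss):
    # log L = log a - b * log N  ->  ordinary least squares in log space.
    slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
    return np.exp(intercept), -slope   # (a, b)

# Synthetic points lying exactly on a known curve, for illustration only.
n = np.array([1e7, 1e8, 1e9, 1e10])
loss = 12.0 * n ** -0.076
a, b = fit_power_law(n, loss)   # recovers a ~ 12.0, b ~ 0.076
```

Real studies fit many such curves (per language, per data mix) and compare the exponents; the log-space regression above is the standard first step.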

AI · Neutral · IEEE Spectrum – AI · Dec 31 · 6/10 · 5

The Top 6 AI Stories of 2025

IEEE Spectrum's review of 2025's top AI stories describes a year of maturation rather than hype: generative AI moved from novelty to routine use while drawing growing scrutiny over environmental costs, reliability, and practical limits. The coverage highlights breakthrough applications such as weather forecasting and coding assistance alongside persistent challenges, including water consumption, failure modes unlike human errors, and the proliferation of AI-generated content.

AI · Bullish · Microsoft Research Blog · Dec 10 · 6/10 · 3

Promptions helps make AI prompting more precise with dynamic UI controls

Microsoft Research introduces Promptions, a tool that helps developers add dynamic UI controls to chat interfaces for more precise AI prompting. The system allows users to guide generative AI responses through intuitive controls rather than complex written instructions.

AI · Bullish · Google Research Blog · Dec 4 · 6/10 · 7

Titans + MIRAS: Helping AI have long-term memory

The article covers Titans + MIRAS, a line of work designed to give AI systems long-term memory. It targets current limits on how much models retain across long contexts, which could improve performance on memory-dependent tasks.

AI · Bullish · Google DeepMind Blog · Nov 10 · 5/10 · 6

How AI is giving Northern Ireland teachers time back

A six-month pilot program with Northern Ireland's Education Authority found that integrating Gemini and other generative AI tools saved participating teachers an average of 10 hours per week. The study demonstrates practical AI implementation in education, showing significant time savings for administrative and teaching tasks.

AI · Bullish · Google Research Blog · Sep 23 · 6/10 · 5

Time series foundation models can be few-shot learners

The article covers time series foundation models acting as few-shot learners: the models pick up new patterns from only a handful of examples, which could improve forecasting and prediction tasks across domains.

AI · Bullish · Hugging Face Blog · Aug 13 · 6/10 · 7

Arm & ExecuTorch 0.7: Bringing Generative AI to the masses

The title points to Arm processors and the ExecuTorch 0.7 framework bringing generative AI to consumer hardware. However, the article body was empty at summarization time, so no detailed analysis of the technical developments or market implications is possible.

AI · Bullish · Google Research Blog · Jul 28 · 6/10 · 7

SensorLM: Learning the language of wearable sensors

SensorLM applies generative AI to wearable sensor data, letting models interpret streams from devices like smartwatches and fitness trackers. The approach could change how AI handles biometric and movement data in healthcare, fitness, and human-computer interaction applications.

AI · Bullish · Google Research Blog · Jun 23 · 6/10 · 5

Unlocking rich genetic insights through multimodal AI with M-REGLE

The article introduces M-REGLE, a multimodal AI system for extracting genetic insights, extending the application of AI methods to genetic research and analysis.

AI · Bullish · NVIDIA AI Blog · Mar 20 · 6/10 · 4

Innovation to Impact: How NVIDIA Research Fuels Transformative Work in AI, Graphics and Beyond

NVIDIA's research organization, a global team of around 400 experts established in 2006, serves as the foundation for the company's landmark innovations in AI, accelerated computing, real-time ray tracing, and data center connectivity. The research division spans multiple fields including computer architecture, generative AI, graphics, and robotics, driving transformative technological developments.

AI · Bullish · Google DeepMind Blog · Dec 16 · 6/10 · 7

State-of-the-art video and image generation with Veo 2 and Imagen 3

Google announces the release of Veo 2, a new state-of-the-art video generation model, along with updates to their Imagen 3 image generation system. The company is also introducing Whisk, a new experimental tool in their AI generation suite.

AI · Bullish · Google DeepMind Blog · Oct 23 · 6/10 · 4

New generative AI tools open the doors of music creation

Google has launched new AI music creation tools including MusicFX DJ, Music AI Sandbox, and integration with YouTube Shorts. These generative AI technologies aim to democratize music creation by making advanced audio generation capabilities accessible to broader audiences.

AI · Bullish · OpenAI News · Jun 20 · 6/10 · 5

Improved Techniques for Training Consistency Models

Consistency models represent a new family of generative AI models that can produce high-quality data samples in a single step without requiring adversarial training methods. This research focuses on developing improved training techniques for these models.
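
The property that makes single-step generation possible, f(x, ε) = x at the smallest noise level, is typically enforced through a skip-connection parameterization rather than learned. Below is a minimal numpy sketch of that form; the constants SIGMA_DATA and EPS follow the shape used in the consistency-models literature, and F is a stand-in placeholder, not a trained network.

```python
import numpy as np

SIGMA_DATA, EPS = 0.5, 0.002  # illustrative constants

def c_skip(t):
    # Equals 1 at t = EPS, decays as t grows.
    return SIGMA_DATA**2 / ((t - EPS)**2 + SIGMA_DATA**2)

def c_out(t):
    # Equals 0 at t = EPS, so the network output is gated off there.
    return SIGMA_DATA * (t - EPS) / np.sqrt(SIGMA_DATA**2 + t**2)

def consistency_fn(x, t, F):
    # Self-consistency: every point x_t on a trajectory maps to its origin.
    return c_skip(t) * x + c_out(t) * F(x, t)

F = lambda x, t: np.tanh(x)   # placeholder for the learned network
x = np.linspace(-1.0, 1.0, 5)
assert np.allclose(consistency_fn(x, EPS, F), x)  # identity at t = EPS
```

Because the identity at t = EPS is built into the parameterization, training only has to make outputs agree along trajectories, which is what removes the need for adversarial or multi-step sampling.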

AI · Bullish · Hugging Face Blog · May 23 · 6/10 · 5

Instruction-tuning Stable Diffusion with InstructPix2Pix

The article discusses InstructPix2Pix, a method for instruction-tuning Stable Diffusion models to enable text-guided image editing. This technique allows users to provide natural language instructions to modify existing images rather than generating new ones from scratch.

AI · Bullish · Hugging Face Blog · May 16 · 6/10 · 5

Smaller is better: Q8-Chat, an efficient generative AI experience on Xeon

The article discusses Q8-Chat, a more efficient generative AI solution designed to run on Intel Xeon processors. This development focuses on optimizing AI performance through smaller, more efficient models rather than simply scaling up model size.

AI · Neutral · Lil'Log (Lilian Weng) · Jul 11 · 6/10

What are Diffusion Models?

Diffusion models are a new type of generative AI model that can learn complex data distributions and generate high-quality images competitive with state-of-the-art GANs. The article covers recent developments including classifier-free guidance, GLIDE, unCLIP, Imagen, latent diffusion models, and consistency models.
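
The forward (noising) process that diffusion models learn to invert has a convenient closed form: x_t = √(ᾱ_t)·x₀ + √(1−ᾱ_t)·ε. A minimal numpy sketch, assuming the standard linear beta schedule from the DDPM paper; the function names are illustrative, not from any particular library:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]        # cumulative signal retention
    eps = rng.standard_normal(x0.shape)      # fresh Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # linear DDPM schedule
x0 = rng.standard_normal((4, 4))             # toy "image"
xT = forward_diffuse(x0, 999, betas, rng)
# By t = 999, alpha_bar is near zero, so x_T is almost pure noise;
# generation runs this process in reverse, denoising step by step.
```

Techniques like classifier-free guidance and latent diffusion, covered in the article, all build on this same forward process.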

AI · Bullish · OpenAI News · Jul 9 · 6/10 · 8

Glow: Better reversible generative models

Researchers introduce Glow, a reversible generative AI model that uses invertible 1x1 convolutions to generate high-resolution images with efficient sampling capabilities. The model simplifies previous architectures while enabling feature discovery for data attribute manipulation, with code and visualization tools being made publicly available.
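
An invertible 1x1 convolution is, per pixel, just a matrix multiply over the channel axis, so inversion is a multiply by W⁻¹ and the flow's log-likelihood picks up h·w·log|det W|. A minimal numpy sketch, with W initialized as a random rotation so it starts invertible, as in the paper:

```python
import numpy as np

def invertible_1x1_conv(x, W):
    # x: (H, W, C). A 1x1 conv mixes channels at every pixel:
    # a matrix multiply along the last (channel) axis.
    return x @ W.T

def inverse_1x1_conv(y, W):
    return y @ np.linalg.inv(W).T

rng = np.random.default_rng(0)
C = 3
W, _ = np.linalg.qr(rng.standard_normal((C, C)))  # random rotation init
x = rng.standard_normal((8, 8, C))

y = invertible_1x1_conv(x, W)
x_rec = inverse_1x1_conv(y, W)                    # exact reconstruction

# Log-determinant term the flow adds to the likelihood; it is zero at
# init because a rotation has |det W| = 1.
h, w, _ = x.shape
log_det = h * w * np.log(np.abs(np.linalg.det(W)))
```

Exact invertibility plus a cheap log-determinant is what lets Glow compute likelihoods directly and sample efficiently.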

← Prev · Page 7 of 9 · Next →