y0news

#machine-learning News & Analysis

2519 articles tagged with #machine-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · Google Research Blog · Jun 23 · 6/10 · 5

Unlocking rich genetic insights through multimodal AI with M-REGLE

The article introduces M-REGLE, a multimodal AI method for extracting genetic insights from combined biomedical data modalities. This represents a notable step in applying AI to genetic research and analysis.

AI · Bullish · Google DeepMind Blog · Jun 17 · 6/10 · 6

Gemini 2.5: Updates to our family of thinking models

Google announces updates to its Gemini 2.5 AI model family, with Gemini 2.5 Pro now stable, Flash model reaching general availability, and a new Flash-Lite variant entering preview. These updates focus on enhanced performance and accuracy across the model lineup.

AI · Bullish · Google DeepMind Blog · Jun 12 · 6/10 · 4

How we're supporting better tropical cyclone prediction with AI

Google is launching Weather Lab with experimental cyclone prediction capabilities and partnering with the U.S. National Hurricane Center to enhance weather forecasting. This initiative leverages AI technology to improve tropical cyclone prediction accuracy and support official weather warnings.

AI · Bullish · Hugging Face Blog · Jun 3 · 6/10 · 6

SmolVLA: Efficient Vision-Language-Action Model trained on Lerobot Community Data

SmolVLA is a new efficient vision-language-action model that has been trained using data from the Lerobot community. This represents an advancement in AI models that can process visual and language inputs to generate actions, potentially improving robotic and automation applications.

AI · Bullish · Hugging Face Blog · Jun 3 · 6/10 · 5

No GPU left behind: Unlocking Efficiency with Co-located vLLM in TRL

The article discusses improving GPU efficiency by co-locating vLLM inference workers on the same GPUs as training in TRL (Hugging Face's Transformer Reinforcement Learning library), rather than reserving separate GPUs for generation. This approach aims to maximize GPU utilization and reduce idle compute during online RL fine-tuning.

AI · Bullish · Google DeepMind Blog · May 20 · 6/10 · 2

Gemini 2.5: Our most intelligent models are getting even better

Google announces updates to its Gemini AI models, with Gemini 2.5 Pro maintaining its position as the preferred coding model for developers and 2.5 Flash receiving improvements. The company introduces Deep Think, an experimental enhanced reasoning mode for the 2.5 Pro model.

AI · Bullish · Google DeepMind Blog · May 20 · 6/10 · 5

Advancing Gemini's security safeguards

Google has announced that Gemini 2.5 is their most secure AI model family to date, highlighting enhanced security safeguards. The announcement suggests continued improvements in AI safety and security measures for their flagship language model.

AI · Bullish · Lil'Log (Lilian Weng) · May 1 · 6/10

Why We Think

This post reviews recent developments in test-time compute and chain-of-thought (CoT) techniques for AI models, examining how giving models 'thinking time' during inference yields significant performance improvements while raising new research questions.
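One concrete test-time-compute technique covered in this line of work is parallel sampling with self-consistency: sample several independent reasoning paths and majority-vote the final answers. A minimal sketch, with a simulated stochastic model call standing in for a real LLM:

```python
from collections import Counter
import random

def sample_answer(question, rng):
    """Stand-in for one stochastic chain-of-thought rollout.
    A real implementation would sample a reasoning trace from an LLM
    and parse out its final answer; here we simulate a model that
    answers correctly about 70% of the time."""
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 9))

def self_consistency(question, n_samples=25, seed=0):
    """Spend extra inference-time compute by sampling n answers
    and returning the most common one (majority vote)."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    answer, count = votes.most_common(1)[0]
    return answer, count / n_samples

answer, agreement = self_consistency("What is 6 * 7?")
```

Because wrong samples scatter across many answers while correct ones concentrate, the vote recovers the right answer even from an unreliable sampler; the cost is linear in `n_samples`, which is exactly the compute-for-accuracy trade the post surveys.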

AI · Bullish · OpenAI News · Apr 24 · 6/10 · 4

New in ChatGPT for Business: April 2025

ChatGPT for Business introduces new features in April 2025 including the o3 model, image generation capabilities, enhanced memory functionality, and internal knowledge systems. The announcement includes hands-on demonstrations of these business-focused AI tools and capabilities.

AI · Bullish · OpenAI News · Apr 23 · 6/10 · 6

Introducing our latest image generation model in the API

A new image generation model called 'gpt-image-1' is now available through an API, allowing developers and businesses to integrate professional-grade visual creation capabilities directly into their applications and platforms. This represents an expansion of AI-powered content generation tools for commercial use.

AI · Neutral · OpenAI News · Mar 26 · 6/10 · 7

Security on the path to AGI

OpenAI is implementing comprehensive security measures directly into their infrastructure and models as they progress toward artificial general intelligence (AGI). The company emphasizes proactive adaptation to address security challenges on the path to AGI development.

AI · Bullish · Google DeepMind Blog · Mar 12 · 6/10 · 5

Introducing Gemma 3

Google has announced Gemma 3, positioning it as their most capable AI model that can run on a single GPU or TPU. This represents a significant advancement in making powerful AI models more accessible for individual developers and smaller organizations.

AI · Bullish · Hugging Face Blog · Mar 12 · 6/10 · 7

Welcome Gemma 3: Google's all new multimodal, multilingual, long context open LLM

Google has announced Gemma 3, its latest open large language model, featuring multimodal capabilities, multilingual support, and an extended context window. This represents a significant advancement in Google's open LLM offerings.

AI · Bullish · Hugging Face Blog · Feb 21 · 6/10 · 6

SigLIP 2: A better multilingual vision language encoder

SigLIP 2 represents an advancement in multilingual vision-language encoding technology, building upon the original SigLIP model. This improved encoder aims to better understand and process visual content across multiple languages, potentially enhancing AI applications that require cross-lingual visual comprehension.
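The "Sig" in SigLIP refers to its pairwise sigmoid loss, which replaces the usual batch-wide softmax contrastive loss. A minimal pure-Python sketch of that idea (the temperature `t` and bias `b` here are illustrative placeholders, not the paper's trained values):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def siglip_loss(img_embs, txt_embs, t=10.0, b=-10.0):
    """Pairwise sigmoid contrastive loss in the style of SigLIP.
    Every image-text pair gets an independent binary label
    (+1 on the diagonal, -1 off it) and a log-sigmoid loss term,
    instead of a softmax normalized over the whole batch."""
    n = len(img_embs)
    total = 0.0
    for i in range(n):
        for j in range(n):
            logit = t * dot(img_embs[i], txt_embs[j]) + b
            label = 1.0 if i == j else -1.0
            total += math.log1p(math.exp(-label * logit))  # -log sigmoid(label * logit)
    return total / n

# Aligned image/text embeddings should score a lower loss than shuffled ones.
embs = [[1.0, 0.0], [0.0, 1.0]]
loss_aligned = siglip_loss(embs, embs)
loss_shuffled = siglip_loss(embs, embs[::-1])
```

Because each pair is scored independently, the loss decouples from batch size, one of the practical motivations for the sigmoid formulation.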

AI · Bullish · Hugging Face Blog · Feb 19 · 6/10 · 4

PaliGemma 2 Mix - New Instruction Vision Language Models by Google

Google has released PaliGemma 2 Mix, a new series of instruction-tuned vision-language models that can process both text and images. These models represent an advancement in multimodal AI capabilities, allowing for more sophisticated visual understanding and instruction-following tasks.

AI · Neutral · OpenAI News · Feb 12 · 5/10 · 4

Sharing the latest Model Spec

OpenAI has released updates to their Model Spec, incorporating external feedback and ongoing research to better shape AI model behavior. The updates represent continued efforts to refine guidelines for AI model development and deployment.

AI · Neutral · OpenAI News · Jan 22 · 5/10 · 5

Trading inference-time compute for adversarial robustness

The article discusses research on trading computational resources during inference time to improve adversarial robustness in AI systems. This approach explores how allocating more compute power at inference can enhance model security against adversarial attacks.

AI · Bullish · Hugging Face Blog · Jan 22 · 6/10 · 6

Hugging Face and FriendliAI partner to supercharge model deployment on the Hub

Hugging Face and FriendliAI have announced a strategic partnership to enhance AI model deployment capabilities on Hugging Face's platform. This collaboration aims to streamline and accelerate the process of deploying machine learning models, making it easier for developers to implement AI solutions.

AI · Bullish · Google DeepMind Blog · Dec 17 · 6/10 · 3

FACTS Grounding: A new benchmark for evaluating the factuality of large language models

Researchers have introduced FACTS Grounding, a new benchmark designed to evaluate how accurately large language models ground their responses in source material and avoid hallucinations. The benchmark includes a comprehensive evaluation system and online leaderboard to measure LLM factuality performance.
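As a toy illustration of what such a benchmark measures, a hypothetical word-overlap grounding score is sketched below. Note the real FACTS Grounding scores responses with LLM judge models, not word overlap; this stand-in only illustrates the shape of the task (is each claim supported by the source document?):

```python
def grounded_fraction(response_sentences, source_text, min_overlap=0.5):
    """Toy grounding check: a sentence counts as grounded when at
    least min_overlap of its longer words also appear in the source
    document. A crude proxy for 'supported by the source material'."""
    source_words = set(source_text.lower().split())
    grounded = 0
    for sent in response_sentences:
        words = [w.strip(".,").lower() for w in sent.split() if len(w) > 3]
        if words and sum(w in source_words for w in words) / len(words) >= min_overlap:
            grounded += 1
    return grounded / len(response_sentences)

doc = "the model was trained on public data and released in december"
score_good = grounded_fraction(["The model was trained on public data."], doc)
score_bad = grounded_fraction(["The model costs five million dollars."], doc)
```

A grounded claim scores high, while an unsupported (hallucinated) one scores low; the benchmark's judge models make the same distinction far more robustly.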

AI · Bullish · Google DeepMind Blog · Dec 5 · 6/10 · 4

Google DeepMind at NeurIPS 2024

Google DeepMind presents research at NeurIPS 2024 focused on advancing adaptive AI agents, empowering 3D scene creation capabilities, and developing innovations in large language model training. The research aims to create smarter and safer AI systems for future applications.

AI · Bullish · Hugging Face Blog · Nov 20 · 6/10 · 4

Faster Text Generation with Self-Speculative Decoding

The article covers self-speculative decoding, a technique that accelerates text generation by having a model draft tokens cheaply and then verify them with a full forward pass, without needing a separate draft model. The resulting speedup has significant implications for AI model deployment and efficiency.
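The general draft-then-verify loop behind speculative decoding can be sketched in a few lines. The `target_step` and `draft_step` callables below are stand-ins for real model forward passes; in the self-speculative variant, the draft would be an early-exit pass through a subset of the same model's layers:

```python
def speculative_generate(prompt, target_step, draft_step, k=4, max_new=12):
    """Greedy speculative decoding sketch.

    draft_step proposes tokens one at a time (cheap model);
    target_step is the expensive model's greedy next-token choice.
    Each round drafts k tokens, keeps the longest prefix the target
    agrees with, then appends one token from the target."""
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new:
        # 1. Cheap draft pass: propose k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_step(tokens + draft))
        # 2. Verify: accept drafted tokens while the target agrees.
        accepted = []
        for tok in draft:
            if target_step(tokens + accepted) == tok:
                accepted.append(tok)
            else:
                break
        # 3. Always emit one token from the target (the correction
        #    when the draft diverged, or the next token otherwise).
        accepted.append(target_step(tokens + accepted))
        tokens.extend(accepted)
    return tokens[:len(prompt) + max_new]

# Toy "models": the target continues the alphabet by position;
# the draft agrees except at every 3rd context length.
target = lambda ctx: chr(ord('a') + len(ctx) % 26)
draft = lambda ctx: target(ctx) if len(ctx) % 3 else 'z'
out = speculative_generate(list("ab"), target, draft, k=3, max_new=6)
```

The output matches plain greedy decoding with the target model, but the target is queried in accept/verify rounds rather than once per token, which is where the speedup comes from when verification is batched.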

AI · Bullish · Hugging Face Blog · Oct 22 · 6/10 · 5

Transformers.js v3: WebGPU Support, New Models & Tasks, and More…

Transformers.js v3 has been released with major upgrades, including WebGPU support for faster in-browser inference along with support for new models and tasks. This update represents a significant advancement in browser-based machine learning infrastructure.
