y0news

#ai-models News & Analysis

199 articles tagged with #ai-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · Hugging Face Blog · Jul 31 · 6/10

Google releases Gemma 2 2B, ShieldGemma and Gemma Scope

Google has released Gemma 2 2B, a smaller 2-billion-parameter version of its open Gemma 2 model, alongside ShieldGemma for safety filtering and Gemma Scope for model interpretability. These releases expand Google's Gemma family with more accessible and transparent AI tools for developers and researchers.

AI · Bullish · Hugging Face Blog · Jul 30 · 6/10

Memory-efficient Diffusion Transformers with Quanto and Diffusers

The article discusses memory-efficient implementation of Diffusion Transformers using Quanto quantization library integrated with Diffusers. This technical advancement enables running large-scale AI image generation models with reduced memory requirements, making them more accessible for deployment.
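Quanto's own API is not reproduced here; as background, weight quantization trades a little precision for memory by storing low-bit integers plus a floating-point scale. A minimal symmetric int8 sketch (toy weights, illustrative only):

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: int8 values plus one fp scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.02, -0.51, 0.33, 1.27, -1.27]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(w, w_hat))
# int8 storage is 1 byte/weight vs 4 for fp32: ~4x smaller, bounded error.
```

Applied to the billions of parameters in a diffusion transformer, this is what shrinks the memory footprint enough to fit consumer GPUs.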

AI · Bullish · Hugging Face Blog · Jun 27 · 6/10

Welcome Gemma 2 - Google’s new open LLM

Google has released Gemma 2, a new open-source large language model that represents the company's latest advancement in accessible AI technology. The model aims to provide developers and researchers with powerful AI capabilities while maintaining Google's commitment to open-source development.

AI · Neutral · OpenAI News · Mar 29 · 6/10

Navigating the challenges and opportunities of synthetic voices

OpenAI shares insights from a limited preview of Voice Engine, their model for creating synthetic custom voices. The company is exploring the technology's potential while addressing associated challenges and risks.

AI · Bullish · OpenAI News · Jan 25 · 6/10

New embedding models and API updates

OpenAI is launching a new generation of embedding models, updated GPT-4 Turbo and moderation models, along with new API usage management tools. The company also announced upcoming lower pricing for GPT-3.5 Turbo, indicating continued development and cost optimization of their AI model offerings.
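Embedding models map text to vectors that are typically compared by cosine similarity; a toy sketch of the downstream usage (the 4-dimensional vectors are made up — real embeddings have hundreds to thousands of dimensions):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.3, -0.2, 0.8]
doc_a = [0.1, 0.28, -0.18, 0.79]   # semantically close to the query
doc_b = [-0.7, 0.1, 0.6, -0.1]     # unrelated

ranked = sorted(["doc_a", "doc_b"],
                key=lambda d: cosine_similarity(query, eval(d)),
                reverse=True)
```

Retrieval, clustering, and recommendation over embeddings all reduce to this kind of nearest-neighbor ranking.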

AI · Bullish · Hugging Face Blog · Dec 5 · 6/10

Goodbye cold boot - how we made LoRA Inference 300% faster

Hugging Face describes eliminating cold-boot overhead in LoRA (Low-Rank Adaptation) inference, reporting a 300% speed improvement. The optimization targets serving efficiency, making it cheaper to host many fine-tuned variants of a shared base model.
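The serving optimizations themselves aren't detailed here; as background, a LoRA adapter adds a low-rank residual path to a frozen base weight, which is why adapters are small enough to swap quickly. A minimal forward-pass sketch with toy numbers (not Hugging Face's serving code):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Frozen base weight W (2x3) plus a rank-1 LoRA update B (2x1) @ A (1x3).
W = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 0.0]]
A = [[0.5, 0.5, 0.0]]        # down-projection, rank r = 1
B = [[2.0], [0.0]]           # up-projection

x = [1.0, 2.0, 3.0]

base = matvec(W, x)                   # W @ x, shared across all adapters
low_rank = matvec(B, matvec(A, x))    # B @ (A @ x), the cheap adapter path
adapted = [b + l for b, l in zip(base, low_rank)]
```

Because only A and B differ per fine-tune, a server can keep W resident and hot-swap adapters instead of reloading whole models.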

AI · Bullish · Hugging Face Blog · Aug 10 · 6/10

Hugging Face Hub on the AWS Marketplace: Pay with your AWS Account

Hugging Face has made its AI model hub available on AWS Marketplace, allowing users to pay for services directly through their AWS accounts. This integration streamlines billing and procurement for enterprises already using AWS infrastructure.

AI · Bullish · OpenAI News · Jun 13 · 6/10

Function calling and other API updates

OpenAI announced significant updates to its API, including enhanced model steerability, function calling capabilities, extended context windows, and reduced pricing. These improvements represent meaningful advances in AI API functionality and accessibility for developers.
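A minimal sketch of the function-calling round trip, with a stub standing in for the real model (the schema style mirrors chat-API function declarations, but all names and values here are illustrative):

```python
import json

# Declared tool schema the model can choose to invoke.
tools = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def get_weather(city):
    return {"city": city, "temp_c": 21}   # stand-in for a real weather API

def fake_model(messages, tools):
    """Stub: a real model would emit a structured function_call here."""
    if "weather" in messages[-1]["content"].lower():
        return {"function_call": {"name": "get_weather",
                                  "arguments": json.dumps({"city": "Paris"})}}
    return {"content": "I can only answer weather questions."}

# One round trip: model picks a function, the app executes it, and the
# result is appended so the model can compose a final answer.
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
reply = fake_model(messages, tools)
call = reply["function_call"]
args = json.loads(call["arguments"])
result = {"get_weather": get_weather}[call["name"]](**args)
messages.append({"role": "function", "name": call["name"],
                 "content": json.dumps(result)})
```

The key design point is that the model never executes anything itself; it emits structured JSON that the application validates and dispatches.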

AI · Bullish · Hugging Face Blog · May 23 · 6/10

Instruction-tuning Stable Diffusion with InstructPix2Pix

The article discusses InstructPix2Pix, a method for instruction-tuning Stable Diffusion models to enable text-guided image editing. This technique allows users to provide natural language instructions to modify existing images rather than generating new ones from scratch.
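InstructPix2Pix conditions the denoiser on both the input image and the edit instruction, combining three noise predictions with separate guidance scales (the formula follows the InstructPix2Pix paper; the per-element numbers below are toy values, not real model outputs):

```python
def guided_noise(e_uncond, e_img, e_full, s_img=1.5, s_txt=7.5):
    """Dual classifier-free guidance over image and text conditioning:
    e = e(0,0) + s_img * (e(img,0) - e(0,0)) + s_txt * (e(img,txt) - e(img,0))
    """
    return [u + s_img * (i - u) + s_txt * (f - i)
            for u, i, f in zip(e_uncond, e_img, e_full)]

# Toy noise predictions from three denoiser passes over two elements.
e_uncond = [0.0, 0.2]   # no conditioning
e_img    = [0.1, 0.1]   # image conditioning only
e_full   = [0.3, 0.0]   # image + edit instruction
out = guided_noise(e_uncond, e_img, e_full)
```

Tuning `s_img` against `s_txt` trades faithfulness to the original image against obedience to the instruction.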

AI · Bullish · Hugging Face Blog · Dec 1 · 6/10

Probabilistic Time Series Forecasting with 🤗 Transformers

The article discusses probabilistic time series forecasting using Hugging Face Transformers, a machine learning approach for predicting future data points with uncertainty estimates. This technique has applications in financial markets, including cryptocurrency price prediction and risk assessment.
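The Transformers-based forecaster is not reproduced here; as background, "probabilistic" forecasting means emitting sample trajectories (or a distribution) rather than a single point estimate, so intervals come from quantiles over samples. A toy random-walk sketch of that workflow:

```python
import random
import statistics

random.seed(0)

def sample_forecasts(history, horizon=5, n_samples=200):
    """Draw forecast trajectories from a toy Gaussian random-walk model
    fitted to the historical step-to-step differences."""
    diffs = [b - a for a, b in zip(history, history[1:])]
    mu, sigma = statistics.mean(diffs), statistics.stdev(diffs)
    paths = []
    for _ in range(n_samples):
        level, path = history[-1], []
        for _ in range(horizon):
            level += random.gauss(mu, sigma)
            path.append(level)
        paths.append(path)
    return paths

history = [100, 101, 103, 102, 105, 107]
paths = sample_forecasts(history)

# Uncertainty estimate: empirical 10%-90% band at the final horizon step.
finals = sorted(p[-1] for p in paths)
lo, hi = finals[len(finals) // 10], finals[-len(finals) // 10]
```

A neural forecaster replaces the Gaussian random walk with a learned conditional distribution, but the quantile-over-samples step is the same.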

AI · Bullish · Hugging Face Blog · Sep 16 · 6/10

Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate

The article discusses optimizations for running BLOOM inference using DeepSpeed and Accelerate frameworks to achieve significantly faster performance. This represents technical advances in making large language model inference more efficient and accessible.
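A model the size of BLOOM only fits by sharding weights across GPUs. The core idea of the tensor parallelism used in such setups can be sketched as a column-split matrix-vector product whose partial results are summed (the sum standing in for the all-reduce across devices; this is an illustration, not DeepSpeed code):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def shard_columns(M, v, n_shards):
    """Column-parallel matmul: each 'device' holds a slice of the weight
    columns and the matching slice of the input, then partial outputs
    are summed elementwise (the all-reduce step on real hardware)."""
    cols = len(M[0])
    step = cols // n_shards
    partials = []
    for s in range(n_shards):
        sl = slice(s * step, (s + 1) * step)
        M_s = [row[sl] for row in M]
        partials.append(matvec(M_s, v[sl]))
    return [sum(vals) for vals in zip(*partials)]

W = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
x = [1, 1, 1, 1]
y = shard_columns(W, x, 2)   # same result as the unsharded matvec
```

Each shard touches only 1/n of the weights, which is what lets a 176B-parameter model spread across a node of GPUs.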

AI · Bullish · Hugging Face Blog · Mar 28 · 6/10

Introducing Decision Transformers on Hugging Face 🤗

Hugging Face is introducing Decision Transformers, which frame reinforcement learning as a sequence-modeling problem conditioned on desired returns. The article body was empty at summarization time, limiting detailed analysis of the announcement's scope and implications.

AI · Bullish · OpenAI News · Mar 15 · 6/10

New GPT-3 capabilities: Edit & insert

OpenAI has released new versions of GPT-3 and Codex with enhanced capabilities that allow users to edit and insert content into existing text, rather than only completing text. This represents a significant advancement in AI text editing functionality beyond traditional text generation.

AI · Neutral · arXiv – CS AI · Apr 7 · 4/10

A Model of Understanding in Deep Learning Systems

A new research paper proposes a model for understanding in deep learning systems, arguing that contemporary AI can achieve systematic understanding through internal models that track regularities and support reliable predictions. However, the research suggests this understanding falls short of scientific ideals due to symbolic misalignment and lack of explicit reductive properties.

AI · Neutral · The Register – AI · Mar 16 · 5/10

Free Software Foundation calls for free-range LLMs rather than factory-farmed AI

The Free Software Foundation is advocating for open-source, community-developed AI models ("free-range LLMs") as an alternative to proprietary AI systems developed by large corporations ("factory-farmed AI"). This represents a push for democratization and transparency in AI development, emphasizing user freedom and community control over AI technology.

AI · Neutral · The Verge – AI · Mar 15 · 5/10

AI companies want to harvest improv actors’ skills to train AI on human emotion

AI companies are recruiting improv actors through companies like Handshake AI to train AI models on human emotion and authentic character portrayal. This represents a growing trend of AI labs seeking increasingly specialized training data to improve their models' emotional intelligence and human-like responses.

🏢 OpenAI
AI · Bullish · OpenAI News · Mar 6 · 5/10

How Descript enables multilingual video dubbing at scale

Descript leverages OpenAI models to enable scalable multilingual video dubbing by optimizing translations for both semantic accuracy and timing synchronization. This technology allows dubbed speech to sound natural across different languages while maintaining proper video-audio alignment.

🏢 OpenAI
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10

When Visual Evidence is Ambiguous: Pareidolia as a Diagnostic Probe for Vision Models

Researchers developed a framework using face pareidolia (seeing faces in non-face objects) to test how different AI vision models handle ambiguous visual information. The study found that vision-language models like CLIP and LLaVA tend to over-interpret ambiguous patterns, while pure vision models remain more uncertain and detection models are more conservative.

AI · Neutral · Microsoft Research Blog · Feb 5 · 4/10

Rethinking imitation learning with Predictive Inverse Dynamics Models

Microsoft Research explores Predictive Inverse Dynamics Models (PIDMs) in imitation learning, showing they outperform standard Behavior Cloning by using predictions to reduce ambiguity. The approach enables more efficient learning from fewer demonstrations compared to traditional methods.

Page 5 of 8