y0news

#hugging-face News & Analysis

196 articles tagged with #hugging-face. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · Hugging Face Blog · Jan 22 · 6/10 · 6

Hugging Face and FriendliAI partner to supercharge model deployment on the Hub

Hugging Face and FriendliAI have announced a strategic partnership to enhance AI model deployment capabilities on Hugging Face's platform. This collaboration aims to streamline and accelerate the process of deploying machine learning models, making it easier for developers to implement AI solutions.

AI · Bullish · Hugging Face Blog · Oct 9 · 6/10 · 8

Scaling AI-based Data Processing with Hugging Face + Dask

The article discusses scaling AI-based data processing using Hugging Face in combination with Dask for distributed computing. This approach enables efficient handling of large-scale machine learning workloads by leveraging parallel processing capabilities.
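The core pattern here is Dask's map-over-partitions: split a large dataset into chunks and apply the same function to each chunk in parallel. A minimal sketch of that pattern using only Python's standard library (the word-count function is a placeholder for a real Hugging Face tokenizer or pipeline call, and all names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for per-document model work; a real job would call a
# Hugging Face tokenizer or pipeline here.
def process_partition(docs):
    return [len(doc.split()) for doc in docs]

def chunked(items, size):
    # Split the dataset into fixed-size partitions, as Dask does
    # before scheduling each partition independently.
    return [items[i:i + size] for i in range(0, len(items), size)]

corpus = ["hello world", "dask scales the pydata stack", "one"]
partitions = chunked(corpus, 2)

# Process partitions in parallel and flatten the per-partition results.
with ThreadPoolExecutor(max_workers=2) as pool:
    word_counts = [n for part in pool.map(process_partition, partitions)
                   for n in part]

print(word_counts)  # [2, 5, 1]
```

Dask adds scheduling, spilling, and cluster execution on top of this idea; the sketch only shows the partition-parallel shape of the workload.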

AI · Bullish · Hugging Face Blog · Sep 4 · 6/10 · 6

Hugging Face partners with TruffleHog to Scan for Secrets

Hugging Face has partnered with TruffleHog to implement automated secret scanning across their AI model repository platform. This collaboration aims to enhance security by detecting exposed API keys, tokens, and other sensitive credentials in code and model repositories.

AI · Bullish · Hugging Face Blog · Aug 8 · 6/10 · 5

XetHub is joining Hugging Face!

XetHub, a data versioning and collaboration platform, is being acquired by Hugging Face, the leading AI model repository and platform. This acquisition strengthens Hugging Face's data infrastructure capabilities and expands their ecosystem for AI development workflows.

AI · Bullish · Hugging Face Blog · Jul 29 · 6/10 · 5

Serverless Inference with Hugging Face and NVIDIA NIM

Hugging Face has partnered with NVIDIA to integrate NIM (NVIDIA Inference Microservices) for serverless AI model inference. This collaboration enables developers to deploy and scale AI models more efficiently using NVIDIA's optimized inference infrastructure through Hugging Face's platform.

AI · Bullish · Hugging Face Blog · Jul 9 · 6/10 · 5

Google Cloud TPUs made available to Hugging Face users

Google Cloud has made its Tensor Processing Units (TPUs) available to Hugging Face users, enabling access to specialized AI hardware for machine learning workloads. This partnership expands computational resources for the AI development community using Hugging Face's platform.

AI · Bullish · Hugging Face Blog · Jun 7 · 6/10 · 6

Introducing the Hugging Face Embedding Container for Amazon SageMaker

Hugging Face has launched a new Embedding Container for Amazon SageMaker, enabling easier deployment of embedding models in AWS cloud infrastructure. This integration streamlines the process for developers to implement text embeddings and vector search capabilities in production environments.

AI · Bullish · Hugging Face Blog · Apr 16 · 6/10 · 4

Running Privacy-Preserving Inferences on Hugging Face Endpoints

The article discusses methods for running privacy-preserving machine learning inferences on Hugging Face endpoints. This technology allows users to perform AI model computations while protecting sensitive input data from being exposed to the service provider.

AI · Bullish · Hugging Face Blog · Feb 8 · 6/10 · 4

From OpenAI to Open LLMs with Messages API on Hugging Face

The article discusses the transition from OpenAI's proprietary models to open-source large language models (LLMs) using Hugging Face's Messages API. This development provides developers with more accessible and customizable AI model deployment options outside of closed ecosystems.
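The point of the Messages API is that it mirrors OpenAI's chat-completions request shape, so existing client code can switch by changing the base URL and model name. A minimal standard-library sketch of such a request (the endpoint path and model name follow the OpenAI-compatible convention but are illustrative; the network call itself is left commented out because it needs a valid token):

```python
import json
import urllib.request

# OpenAI-style chat payload; the model name is illustrative.
payload = {
    "model": "mistralai/Mistral-7B-Instruct-v0.2",
    "messages": [
        {"role": "user", "content": "Explain embeddings in one sentence."},
    ],
    "max_tokens": 64,
}

# Endpoint path assumed from the OpenAI-compatible convention.
url = ("https://api-inference.huggingface.co/models/"
       "mistralai/Mistral-7B-Instruct-v0.2/v1/chat/completions")

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer hf_xxx",  # your Hugging Face token
        "Content-Type": "application/json",
    },
)

# with urllib.request.urlopen(req) as resp:   # requires a valid token
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the request and response shapes match OpenAI's, the official `openai` client can also be pointed at such an endpoint via its `base_url` parameter.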

AI · Bullish · Hugging Face Blog · Feb 1 · 6/10 · 6

Hugging Face Text Generation Inference available for AWS Inferentia2

Hugging Face has made its Text Generation Inference (TGI) service available on AWS Inferentia2 chips, enabling more cost-effective deployment of large language models. This integration allows developers to leverage AWS's custom AI inference chips for running text generation workloads with improved performance and reduced costs.

AI · Bullish · Hugging Face Blog · Jan 10 · 6/10 · 8

Make LLM Fine-tuning 2x faster with Unsloth and 🤗 TRL

Unsloth has partnered with Hugging Face's TRL (Transformer Reinforcement Learning) library to make LLM fine-tuning 2x faster. This collaboration aims to improve the efficiency of training and customizing large language models for developers and researchers.

AI · Bullish · Hugging Face Blog · Dec 5 · 6/10 · 4

AMD + 🤗: Large Language Models Out-of-the-Box Acceleration with AMD GPU

AMD has partnered with Hugging Face to provide out-of-the-box acceleration for Large Language Models on AMD GPUs. This collaboration aims to make AMD's GPU hardware more accessible for AI developers and researchers working with popular open-source AI models.

AI · Bullish · Hugging Face Blog · Oct 4 · 6/10 · 7

Accelerating over 130,000 Hugging Face models with ONNX Runtime

Microsoft's ONNX Runtime now supports over 130,000 Hugging Face models, providing significant performance improvements for AI model inference. This integration enables faster deployment and execution of popular machine learning models across various hardware platforms.

AI · Bullish · Hugging Face Blog · Sep 19 · 6/10 · 7

Rocket Money x Hugging Face: Scaling Volatile ML Models in Production

Rocket Money partnered with Hugging Face to address challenges in scaling volatile machine learning models for production environments. The collaboration focuses on implementing robust infrastructure solutions to handle ML model instability and performance variations in real-world applications.

AI × Crypto · Bullish · Hugging Face Blog · Sep 1 · 6/10 · 5

Fetch Cuts ML Processing Latency by 50% Using Amazon SageMaker & Hugging Face

Fetch has reduced machine learning processing latency by 50% by adopting Amazon SageMaker and Hugging Face technologies. This improvement boosts the performance and cost-efficiency of Fetch's production ML infrastructure.

AI · Bullish · Hugging Face Blog · Aug 10 · 6/10 · 8

Hugging Face Hub on the AWS Marketplace: Pay with your AWS Account

Hugging Face has made its AI model hub available on AWS Marketplace, allowing users to pay for services directly through their AWS accounts. This integration streamlines billing and procurement for enterprises already using AWS infrastructure.

AI · Neutral · Hugging Face Blog · Jul 24 · 6/10 · 6

AI Policy @🤗: Open ML Considerations in the EU AI Act

The article presents Hugging Face's policy perspective on open machine learning in the context of the EU AI Act, discussing how the Act's requirements apply to openly developed and shared models.

AI · Bullish · Hugging Face Blog · Jun 7 · 6/10 · 4

DuckDB: analyze 50,000+ datasets stored on the Hugging Face Hub

DuckDB has integrated with Hugging Face Hub to enable analysis of over 50,000 datasets directly through SQL queries. This integration allows data scientists and researchers to perform analytics on massive datasets hosted on Hugging Face without needing to download them locally.
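In practice this works through DuckDB's `hf://` protocol, which lets a query read Hub-hosted files over HTTP without a local download. A sketch of such a query (the dataset path and column names are illustrative):

```sql
-- Query a Parquet file on the Hub directly over the hf:// protocol;
-- the dataset path and columns below are illustrative.
SELECT language, COUNT(*) AS n
FROM 'hf://datasets/some-user/some-dataset/data/train.parquet'
GROUP BY language
ORDER BY n DESC
LIMIT 10;
```

DuckDB fetches only the byte ranges the query needs, which is what makes ad-hoc analytics over large remote datasets practical.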

AI · Bullish · Hugging Face Blog · May 31 · 6/10 · 6

Introducing the Hugging Face LLM Inference Container for Amazon SageMaker

Hugging Face has launched an LLM Inference Container for Amazon SageMaker, enabling easier deployment and scaling of large language models on AWS infrastructure. This integration streamlines the process for developers to host and serve AI models in production environments.

AI · Bullish · Hugging Face Blog · May 25 · 6/10 · 6

Optimizing Stable Diffusion for Intel CPUs with NNCF and 🤗 Optimum

Intel has released optimization techniques for running Stable Diffusion AI models on CPUs using NNCF (Neural Network Compression Framework) and Hugging Face Optimum. These optimizations aim to improve performance and reduce computational requirements for AI image generation on Intel hardware without requiring expensive GPUs.

Page 2 of 8