196 articles tagged with #hugging-face. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · Hugging Face Blog · Jan 22 · 6/10 · 6
🧠 Hugging Face and FriendliAI have announced a strategic partnership to enhance AI model deployment capabilities on Hugging Face's platform. This collaboration aims to streamline and accelerate the process of deploying machine learning models, making it easier for developers to implement AI solutions.
AI · Bullish · Hugging Face Blog · Oct 22 · 6/10 · 3
🧠 Hugging Face has partnered with Protect AI to enhance security for machine learning models on its platform. This collaboration aims to provide better security tools and protections for the ML community using Hugging Face's model repository and services.
AI · Bullish · Hugging Face Blog · Oct 9 · 6/10 · 8
🧠 The article discusses scaling AI-based data processing using Hugging Face in combination with Dask for distributed computing. This approach enables efficient handling of large-scale machine learning workloads by leveraging parallel processing capabilities.
AI · Bullish · Hugging Face Blog · Sep 4 · 6/10 · 6
🧠 Hugging Face has partnered with TruffleHog to implement automated secret scanning across their AI model repository platform. This collaboration aims to enhance security by detecting exposed API keys, tokens, and other sensitive credentials in code and model repositories.
AI · Bullish · Hugging Face Blog · Aug 8 · 6/10 · 5
🧠 Hugging Face, the leading AI model repository and platform, is acquiring XetHub, a data versioning and collaboration platform. This acquisition strengthens Hugging Face's data infrastructure capabilities and expands its ecosystem for AI development workflows.
AI · Bullish · Hugging Face Blog · Jul 29 · 6/10 · 5
🧠 Hugging Face has partnered with NVIDIA to integrate NIM (NVIDIA Inference Microservices) for serverless AI model inference. This collaboration enables developers to deploy and scale AI models more efficiently using NVIDIA's optimized inference infrastructure through Hugging Face's platform.
AI · Bullish · Hugging Face Blog · Jul 9 · 6/10 · 5
🧠 Google Cloud has made its Tensor Processing Units (TPUs) available to Hugging Face users, enabling access to specialized AI hardware for machine learning workloads. This partnership expands computational resources for the AI development community using Hugging Face's platform.
AI · Bullish · Hugging Face Blog · Jun 7 · 6/10 · 6
🧠 Hugging Face has launched a new Embedding Container for Amazon SageMaker, enabling easier deployment of embedding models in AWS cloud infrastructure. This integration streamlines the process for developers to implement text embeddings and vector search capabilities in production environments.
AI · Bullish · Hugging Face Blog · Apr 16 · 6/10 · 4
🧠 The article discusses methods for running privacy-preserving machine learning inferences on Hugging Face endpoints. This technology allows users to perform AI model computations while protecting sensitive input data from being exposed to the service provider.
AI · Bullish · Hugging Face Blog · Apr 4 · 6/10 · 8
🧠 Hugging Face has partnered with Wiz Research to enhance AI security measures. This collaboration aims to improve security protocols and protect AI models and datasets on the Hugging Face platform.
AI · Bullish · Hugging Face Blog · Feb 8 · 6/10 · 4
🧠 The article discusses the transition from OpenAI's proprietary models to open-source large language models (LLMs) using Hugging Face's Messages API. This development provides developers with more accessible and customizable AI model deployment options outside of closed ecosystems.
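Because the Messages API mirrors OpenAI's chat-completions schema, the migration amounts to changing the base URL. A minimal sketch of the request shape follows; the endpoint URL and token are placeholders, the `"tgi"` model alias follows Text Generation Inference's convention, and nothing is actually sent:

```python
import json
from urllib import request

# OpenAI-compatible chat payload; TGI serves whatever model is deployed
# under the alias "tgi".
payload = {
    "model": "tgi",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Messages API in one line."},
    ],
    "max_tokens": 64,
}

# Placeholder endpoint and token; a live Inference Endpoint would accept this
# request at its /v1/chat/completions route.
req = request.Request(
    "https://<your-endpoint>.endpoints.huggingface.cloud/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer <HF_TOKEN>",
        "Content-Type": "application/json",
    },
)
# request.urlopen(req) would return an OpenAI-style completion response;
# not executed here because it needs a live endpoint and a real token.
```

Existing OpenAI client code (official SDKs included) can typically be pointed at such an endpoint by overriding its base URL, which is the compatibility the article highlights.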
AI · Bullish · Hugging Face Blog · Feb 1 · 6/10 · 6
🧠 Hugging Face has made its Text Generation Inference (TGI) service available on AWS Inferentia2 chips, enabling more cost-effective deployment of large language models. This integration allows developers to leverage AWS's custom AI inference chips for running text generation workloads with improved performance and reduced costs.
AI · Bullish · Hugging Face Blog · Jan 10 · 6/10 · 8
🧠 Unsloth has partnered with Hugging Face's TRL (Transformer Reinforcement Learning) library to make LLM fine-tuning 2x faster. This collaboration aims to improve the efficiency of training and customizing large language models for developers and researchers.
AI · Bullish · Hugging Face Blog · Dec 5 · 6/10 · 4
🧠 AMD has partnered with Hugging Face to provide out-of-the-box acceleration for Large Language Models on AMD GPUs. This collaboration aims to make AMD's GPU hardware more accessible for AI developers and researchers working with popular open-source AI models.
AI · Bullish · Hugging Face Blog · Oct 4 · 6/10 · 7
🧠 Microsoft's ONNX Runtime now supports over 130,000 Hugging Face models, providing significant performance improvements for AI model inference. This integration enables faster deployment and execution of popular machine learning models across various hardware platforms.
AI · Bullish · Hugging Face Blog · Sep 19 · 6/10 · 7
🧠 Rocket Money partnered with Hugging Face to address challenges in scaling volatile machine learning models for production environments. The collaboration focuses on implementing robust infrastructure solutions to handle ML model instability and performance variations in real-world applications.
AI × Crypto · Bullish · Hugging Face Blog · Sep 1 · 6/10 · 5
🤖 Fetch.ai has successfully reduced machine learning processing latency by 50% through implementation of Amazon SageMaker and Hugging Face technologies. This technical improvement enhances the performance of Fetch.ai's AI infrastructure and could strengthen its competitive position in the AI-crypto space.
AI · Bullish · Hugging Face Blog · Aug 10 · 6/10 · 8
🧠 Hugging Face has made its AI model hub available on AWS Marketplace, allowing users to pay for services directly through their AWS accounts. This integration streamlines billing and procurement for enterprises already using AWS infrastructure.
AI · Neutral · Hugging Face Blog · Jul 24 · 6/10 · 6
🧠 The article appears to be about AI policy considerations related to open machine learning in the context of the EU AI Act. However, the article body was not provided, making detailed analysis impossible.
AI · Bullish · Hugging Face Blog · Jun 13 · 6/10 · 5
🧠 Hugging Face and AMD have announced a partnership to optimize and accelerate state-of-the-art AI models for both CPU and GPU platforms. This collaboration aims to improve performance and accessibility of AI models across AMD's hardware ecosystem.
AI · Bullish · Hugging Face Blog · Jun 7 · 6/10 · 4
🧠 DuckDB has integrated with Hugging Face Hub to enable analysis of over 50,000 datasets directly through SQL queries. This integration allows data scientists and researchers to perform analytics on massive datasets hosted on Hugging Face without needing to download them locally.
AI · Bullish · Hugging Face Blog · May 31 · 6/10 · 6
🧠 Hugging Face has launched an LLM Inference Container for Amazon SageMaker, enabling easier deployment and scaling of large language models on AWS infrastructure. This integration streamlines the process for developers to host and serve AI models in production environments.
AI · Bullish · Hugging Face Blog · May 25 · 6/10 · 6
🧠 Intel has released optimization techniques for running Stable Diffusion AI models on CPUs using NNCF (Neural Network Compression Framework) and Hugging Face Optimum. These optimizations aim to improve performance and reduce computational requirements for AI image generation on Intel hardware without requiring expensive GPUs.
AI · Bullish · Hugging Face Blog · May 24 · 6/10 · 5
🧠 Hugging Face has partnered with Microsoft to launch the Hugging Face Model Catalog on Azure, expanding access to AI models through Microsoft's cloud platform. This collaboration aims to make AI model deployment and integration more accessible for enterprise customers using Azure services.
AI · Bullish · Hugging Face Blog · May 15 · 6/10 · 6
🧠 Hugging Face has been selected to participate in the French data protection authority's (CNIL) enhanced support program. This program provides regulatory guidance and support to help companies navigate data protection compliance requirements in France.