y0news
#intel · 20 articles
AI · Bullish · Hugging Face Blog · Apr 29 · 6/10

Introducing AutoRound: Intel’s Advanced Quantization for LLMs and VLMs

Intel has introduced AutoRound, an advanced quantization technique designed to optimize Large Language Models (LLMs) and Vision-Language Models (VLMs). This technology aims to reduce model size and computational requirements while maintaining performance quality for AI applications.
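
The core trade behind weight quantization can be shown in a few lines. The sketch below is not AutoRound's algorithm (AutoRound learns its rounding decisions rather than using plain round-to-nearest); it is a generic symmetric 4-bit round-trip over toy weights, to illustrate what quantization gives up in exchange for smaller models:

```python
def quantize_int4(weights):
    """Symmetric round-to-nearest quantization to signed 4-bit ints."""
    # One scale per tensor: map [-max_abs, max_abs] onto [-7, 7].
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 7 if max_abs else 1.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from ints and a scale."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.91, -0.07, 0.33]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
# Round-to-nearest bounds the per-weight error by scale / 2.
print(q)
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

Each weight now needs 4 bits plus a shared scale instead of 32 bits; the cost is the reconstruction error printed above, which techniques like AutoRound work to minimize.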

AI · Bullish · Hugging Face Blog · Mar 20 · 6/10

A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake

The article discusses running a chatbot built on Microsoft's Phi-2 small language model locally on Intel's Meteor Lake processors. This represents a significant step in bringing AI capabilities directly to consumer laptops without requiring cloud connectivity.

AI · Bullish · Hugging Face Blog · May 25 · 6/10

Optimizing Stable Diffusion for Intel CPUs with NNCF and 🤗 Optimum

Intel has released optimization techniques for running Stable Diffusion AI models on CPUs using NNCF (Neural Network Compression Framework) and Hugging Face Optimum. These optimizations aim to improve performance and reduce computational requirements for AI image generation on Intel hardware without requiring expensive GPUs.

AI · Bullish · Hugging Face Blog · Jun 15 · 6/10

Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration

Intel has partnered with Hugging Face to democratize machine learning hardware acceleration, making AI model deployment more accessible across different hardware platforms. This collaboration aims to optimize AI workloads on Intel hardware while leveraging Hugging Face's extensive model ecosystem.

AI · Neutral · Hugging Face Blog · Oct 15 · 4/10

Get your VLM running in 3 simple steps on Intel CPUs

The article provides a tutorial on setting up and running Vision-Language Models (VLMs) on Intel CPUs in three simple steps. It is a technical guide aimed at making VLM deployment more accessible for developers and researchers working with AI models on Intel hardware.

AI · Neutral · Hugging Face Blog · Dec 17 · 4/10

Benchmarking Language Model Performance on 5th Gen Xeon at GCP

The article title suggests a benchmark analysis of language model performance using Intel's 5th generation Xeon processors on Google Cloud Platform. However, the article body appears to be empty or unavailable, preventing detailed analysis of the actual performance results or technical findings.
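
Since the body is missing, the sketch below only illustrates the shape such inference benchmarks typically take (none of it comes from the article itself): time a generation call, divide tokens produced by wall-clock time, and average over several runs after a warm-up. The "model" here is a stub with a fixed delay, not a real LLM on Xeon.

```python
import time

def benchmark(generate, prompt, n_runs=3):
    """Time a text-generation callable and report mean tokens/second."""
    # Warm-up run so one-time setup cost is not measured.
    generate(prompt)
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)

# Stub "model": pretends to emit 128 tokens after a small fixed delay.
def fake_generate(prompt):
    time.sleep(0.01)
    return ["tok"] * 128

print(f"{benchmark(fake_generate, 'hello'):.0f} tokens/s")
```

Real benchmarks additionally separate time-to-first-token from per-token latency and pin batch size, sequence length, and thread affinity, since all of these dominate CPU results.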

AI · Bullish · Hugging Face Blog · Jul 3 · 5/10

Accelerating Protein Language Model ProtST on Intel Gaudi 2

Intel has developed optimizations to accelerate the ProtST protein language model on their Gaudi 2 AI accelerator hardware. This advancement demonstrates Intel's commitment to supporting specialized AI workloads in computational biology and scientific research applications.

AI · Neutral · Hugging Face Blog · Jun 4 · 4/10

Faster assisted generation support for Intel Gaudi

The article title indicates enhanced assisted generation support for Intel Gaudi processors, suggesting improvements to AI inference capabilities. However, the article body appears to be empty, limiting detailed analysis of the specific enhancements or their implications.
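
For context (none of this is from the missing article body): assisted generation, also known as speculative decoding, pairs a cheap draft model with the main model. The draft proposes several tokens ahead, the main model checks them, and the longest agreeing prefix is accepted, so multiple tokens can land per main-model step. A toy sketch of that accept/reject loop, with stand-in functions instead of real models:

```python
def assisted_generate(main_next, draft_next, prompt, max_new=8, k=4):
    """Greedy assisted generation with stand-in next-token functions."""
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # Draft model speculates k tokens ahead.
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # Main model verifies; accept until the first disagreement.
        # (A real implementation batches these checks into one forward pass.)
        accepted = []
        for i, tok in enumerate(draft):
            if main_next(out + draft[:i]) == tok:
                accepted.append(tok)
            else:
                break
        # Always make progress: take one main-model token on total rejection.
        if not accepted:
            accepted = [main_next(out)]
        out.extend(accepted)
    return out[:len(prompt) + max_new]

# Stand-ins: the main model counts up; the draft agrees only at even steps.
main_next = lambda seq: len(seq)
draft_next = lambda seq: len(seq) if len(seq) % 2 == 0 else -1

print(assisted_generate(main_next, draft_next, [0, 1, 2]))
```

The output matches what the main model alone would produce; the speed-up comes entirely from how often the draft is right, which is what hardware-side support like Gaudi's aims to make cheap to verify.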

AI · Bullish · Hugging Face Blog · Mar 15 · 5/10

CPU Optimized Embeddings with 🤗 Optimum Intel and fastRAG

The article appears to discuss CPU optimization techniques for embeddings using Hugging Face's Optimum Intel library and fastRAG framework. This represents technical advancement in making AI inference more efficient on CPU hardware rather than requiring expensive GPU resources.
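
A minimal sketch of the dense-retrieval pipeline fastRAG builds on: documents are embedded once, then ranked against a query by cosine similarity. The vectors below are hand-written toys (a real system gets them from an embedding model, which is the part Optimum Intel accelerates on CPU), and nothing here uses the actual fastRAG API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings": in practice these come from an embedding model.
docs = {
    "quantization guide": [0.9, 0.1, 0.0],
    "cpu inference tips": [0.7, 0.6, 0.1],
    "cooking pasta":      [0.0, 0.1, 0.9],
}
query = [0.8, 0.5, 0.0]

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])
```

Because document embeddings are computed once and reused for every query, speeding up the embedding model on CPU directly cuts both indexing cost and per-query latency.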

AI · Neutral · Hugging Face Blog · Feb 29 · 4/10

Text-Generation Pipeline on Intel® Gaudi® 2 AI Accelerator

Intel has released documentation and implementation details for running text-generation pipelines on their Gaudi 2 AI accelerator hardware. This represents Intel's continued effort to compete in the AI hardware market against NVIDIA's dominant position.

AI · Neutral · Hugging Face Blog · Jul 14 · 4/10

Fine-tuning Stable Diffusion models on Intel CPUs

The article title mentions fine-tuning Stable Diffusion models on Intel CPUs, suggesting content about AI model optimization on consumer hardware. However, no article body content was provided for analysis.

AI · Neutral · Hugging Face Blog · Jun 29 · 4/10

Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2

The article appears to discuss BridgeTower, a vision-language AI model, running on Intel's Habana Gaudi2 processors for accelerated performance. However, the article body is empty, making detailed analysis impossible.

AI · Bullish · Hugging Face Blog · Mar 28 · 4/10

Accelerating Stable Diffusion Inference on Intel CPUs

The article discusses techniques and optimizations for accelerating Stable Diffusion inference on Intel CPU architectures. This focuses on improving AI image generation performance without requiring specialized GPU hardware.

AI · Neutral · Hugging Face Blog · Jan 2 · 4/10

Accelerating PyTorch Transformers with Intel Sapphire Rapids - part 1

The article title suggests content about optimizing PyTorch Transformers on Intel's Sapphire Rapids processors. However, the article body appears to be empty or not provided, preventing detailed analysis of the implementation details or performance improvements.
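
As background only, since the body is unavailable: Sapphire Rapids' headline AI feature is AMX, on-die matrix hardware that computes in bfloat16 and int8. The stdlib sketch below shows what the bfloat16 format itself is (a float32 with the low 16 mantissa bits rounded away); it involves no Intel-specific API.

```python
import struct

def to_bfloat16(x):
    """Round a Python float to the nearest bfloat16 value."""
    # Reinterpret the float32 bit pattern as an unsigned 32-bit int.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round-to-nearest-even on the 16 bits about to be discarded.
    bits += 0x7FFF + ((bits >> 16) & 1)
    # Keep only the top 16 bits: sign, 8-bit exponent, 7-bit mantissa.
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159))  # 3.140625: only ~3 significant digits survive
```

bfloat16 keeps float32's full exponent range while halving storage and doubling matrix-unit throughput, which is why the format dominates on both Sapphire Rapids AMX and most AI accelerators.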

AI · Bullish · Hugging Face Blog · Nov 19 · 4/10

Accelerating PyTorch distributed fine-tuning with Intel technologies

The article discusses methods for accelerating PyTorch distributed fine-tuning using Intel's hardware and software technologies. It focuses on optimizations for training deep learning models more efficiently on Intel infrastructure.
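
The pattern underneath most distributed fine-tuning setups, Intel's included, is data parallelism: each worker computes gradients on its own data shard, the gradients are averaged across workers (an all-reduce), and every worker applies the same update. A toy single-process sketch of that loop, not Intel's actual stack:

```python
def gradient(w, shard):
    """Mean-squared-error gradient for the toy model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """Stand-in for a collective all-reduce: average the gradients."""
    return sum(grads) / len(grads)

# Data for the target y = 2x, split across two "workers".
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for step in range(50):
    local = [gradient(w, s) for s in shards]  # computed in parallel for real
    w -= 0.01 * all_reduce_mean(local)        # synchronized identical update
print(round(w, 2))  # converges toward 2.0
```

Libraries such as PyTorch DDP replace `all_reduce_mean` with a real collective over the cluster interconnect; the speedups the article refers to come from faster gradient kernels and faster communication, not from changing this loop's structure.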