31 articles tagged with #stable-diffusion. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers introduce MapReduce LoRA and Reward-aware Token Embedding (RaTE) to optimize multiple preferences in generative AI models without degrading performance across dimensions. The methods show significant improvements across text-to-image, text-to-video, and language tasks, with gains ranging from 4.3% to 136.7% on various benchmarks.
🧠 Llama · 🧠 Stable Diffusion
AI · Bullish · arXiv – CS AI · Mar 4 · 7/10
🧠 Researchers present P-GRAFT, a new method for fine-tuning diffusion models by shaping distributions at intermediate noise levels, showing improved performance on text-to-image generation tasks. The framework achieved an 8.81% relative improvement over the base Stable Diffusion v2 model on popular benchmarks.
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠 Researchers introduce GLEaN, a visual explainability method that transforms complex AI bias detection into understandable portrait composites, enabling non-technical audiences to grasp how text-to-image models like Stable Diffusion XL associate occupations and identities with specific demographic characteristics.
🧠 Stable Diffusion
AI · Bullish · arXiv – CS AI · 3d ago · 6/10
🧠 Researchers present a novel closed-form method for concept erasure in generative AI models that removes unwanted concepts without iterative training. The technique uses linear transformations and two sequential projection steps to safely edit pretrained models like Stable Diffusion and FLUX while preserving unrelated concepts, completing the process in seconds.
🧠 Stable Diffusion
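As a rough intuition for closed-form, training-free concept erasure, one can project a weight matrix onto the subspace orthogonal to a concept's embedding direction, so the edited weights can no longer respond to that direction. This is a toy NumPy sketch under that assumption — not the paper's exact two-step procedure, and `erase_concept` is a hypothetical helper:

```python
import numpy as np

def erase_concept(W, c):
    """Project the rows of weight matrix W onto the subspace orthogonal
    to concept direction c, so that W @ c ≈ 0 afterwards."""
    c = c / np.linalg.norm(c)            # unit concept direction
    P = np.eye(len(c)) - np.outer(c, c)  # orthogonal projector onto c's complement
    return W @ P                         # closed-form edit, no training loop

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # toy stand-in for a cross-attention weight
c = rng.standard_normal(4)        # embedding of the unwanted concept

W_edited = erase_concept(W, c)
print(np.abs(W_edited @ (c / np.linalg.norm(c))).max())  # ≈ 0
```

Directions orthogonal to `c` pass through unchanged, which is the sense in which unrelated concepts are preserved.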
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce RAZOR, a new framework for efficiently removing sensitive information from AI models like CLIP and Stable Diffusion without requiring full retraining. The method selectively edits specific layers and attention heads in transformer models to achieve targeted 'unlearning' while preserving overall performance.
🧠 Stable Diffusion
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers introduce SurgUn, a surgical unlearning method for text-to-image diffusion models that enables precise removal of specific visual concepts while preserving other capabilities. The approach addresses challenges in copyright compliance and content policy enforcement by applying targeted weight-space updates based on retroactive interference theory.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 DragFlow introduces the first framework to leverage FLUX's DiT priors for drag-based image editing, addressing distortion issues that plagued earlier Stable Diffusion-based approaches. The system uses region-based editing with affine transformations instead of point-based supervision, achieving state-of-the-art results on benchmarks.
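Region-based supervision with affine transformations, as opposed to moving individual handle points, amounts to mapping every coordinate in the dragged region through one shared affine map. A toy NumPy sketch of that idea (pure translation for simplicity; `affine_transform` is a hypothetical helper, not DragFlow's API):

```python
import numpy as np

def affine_transform(points, A, t):
    """Apply the affine map x -> A @ x + t to an (N, 2) array of
    region coordinates -- a stand-in for moving a dragged region."""
    return points @ A.T + t

# Toy example: translate a unit-square region along the drag vector (2, 1)
region = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
A = np.eye(2)            # identity: no rotation or scaling
t = np.array([2.0, 1.0]) # the drag vector
moved = affine_transform(region, A, t)
print(moved[0])  # [2. 1.]
```

Because every point in the region shares the same `A` and `t`, the region moves rigidly rather than distorting point by point.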
AI · Bullish · Hugging Face Blog · Aug 1 · 6/10
🧠 Stability AI has open-sourced knowledge distillation code and model weights for SD-Small and SD-Tiny, making smaller and more efficient versions of Stable Diffusion available to the community. This release enables developers to run image generation models with reduced computational requirements while maintaining reasonable quality.
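Knowledge distillation here boils down to training the small UNet to match the teacher's noise predictions. A toy NumPy sketch of that core objective — shapes and names are illustrative, and the actual training recipe lives in the released code:

```python
import numpy as np

def distillation_loss(student_out, teacher_out):
    """MSE between the student's and teacher's predictions -- the core
    objective when compressing a large UNet into a smaller one."""
    return np.mean((student_out - teacher_out) ** 2)

rng = np.random.default_rng(0)
teacher = rng.standard_normal((1, 4, 8, 8))                # teacher noise prediction
student = teacher + 0.1 * rng.standard_normal(teacher.shape)  # imperfect student
print(distillation_loss(student, teacher))  # small -- student tracks the teacher
```

In practice the released recipe also distills intermediate feature maps, not just the final output, but the matching objective has this same shape.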
AI · Bullish · Hugging Face Blog · Jun 15 · 6/10
🧠 Apple has announced a faster Stable Diffusion implementation using the Core ML framework for iPhone, iPad, and Mac devices. This development enables on-device AI image generation with improved performance and efficiency across Apple's ecosystem.
AI · Bullish · Hugging Face Blog · May 25 · 6/10
🧠 Intel has released optimization techniques for running Stable Diffusion AI models on CPUs using NNCF (Neural Network Compression Framework) and Hugging Face Optimum. These optimizations aim to improve performance and reduce computational requirements for AI image generation on Intel hardware without requiring expensive GPUs.
AI · Bullish · Hugging Face Blog · May 23 · 6/10
🧠 The article discusses InstructPix2Pix, a method for instruction-tuning Stable Diffusion models to enable text-guided image editing. This technique allows users to provide natural language instructions to modify existing images rather than generating new ones from scratch.
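At sampling time, InstructPix2Pix combines three UNet noise predictions — unconditional, image-only, and image-plus-instruction — with two separate guidance scales, so the strength of the input image and of the text instruction can be tuned independently. A minimal NumPy sketch of that combination step (scale values are illustrative, and `dual_cfg` is a hypothetical helper, not the pipeline's API):

```python
import numpy as np

def dual_cfg(eps_uncond, eps_img, eps_full, s_img=1.5, s_txt=7.5):
    """Combine three noise predictions with InstructPix2Pix-style dual
    classifier-free guidance: one scale for the input image, one for
    the text instruction."""
    return (eps_uncond
            + s_img * (eps_img - eps_uncond)   # image-conditioning term
            + s_txt * (eps_full - eps_img))    # instruction term

# Toy arrays standing in for UNet outputs at one denoising step
shape = (1, 4, 8, 8)
rng = np.random.default_rng(0)
e_u, e_i, e_f = (rng.standard_normal(shape) for _ in range(3))
print(dual_cfg(e_u, e_i, e_f).shape)  # (1, 4, 8, 8)
```

With both scales set to 1 the expression telescopes back to the fully conditioned prediction, which is a quick sanity check on the formula.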
AI · Bullish · arXiv – CS AI · Mar 3 · 4/10
🧠 Researchers propose TADSR, a Time-Aware one-step Diffusion Network that improves real-world image super-resolution by dynamically varying timesteps instead of using fixed ones. The method achieves state-of-the-art performance while allowing controllable trade-offs between image fidelity and realism in a single processing step.
AI · Bullish · Hugging Face Blog · Oct 22 · 5/10
🧠 The article title indicates that Diffusers, a popular machine learning library, has added support for the Stable Diffusion 3.5 Large model. However, no article body content was provided for analysis.
AI · Bullish · Hugging Face Blog · Jan 15 · 5/10
🧠 The article discusses optimization techniques for accelerating SD Turbo and SDXL Turbo inference using ONNX Runtime and Olive. These tools provide performance improvements for running Stable Diffusion models more efficiently.
AI · Bullish · Hugging Face Blog · Oct 3 · 5/10
🧠 Google demonstrates accelerated inference performance for Stable Diffusion XL using the JAX framework on its Cloud TPU v5e hardware. This technical advancement showcases improved efficiency for AI image generation workloads on Google's cloud infrastructure.
AI · Neutral · Hugging Face Blog · Sep 29 · 4/10
🧠 The article appears to be about fine-tuning Stable Diffusion models using DDPO (likely Denoising Diffusion Policy Optimization) via TRL (Transformer Reinforcement Learning). However, the article body is empty, preventing detailed analysis of the technical implementation or implications.
AI · Neutral · Hugging Face Blog · Sep 8 · 4/10
🧠 The article title suggests a technical development regarding T2I-Adapters for SDXL (Stable Diffusion XL), focusing on efficient controllable generation capabilities. However, no article body content was provided for analysis.
AI · Bullish · Hugging Face Blog · Jul 27 · 4/10
🧠 The article appears to discuss the implementation of Stable Diffusion XL on Mac systems using advanced Core ML quantization techniques. This represents a technical advancement in running AI image generation models efficiently on Apple hardware.
AI · Neutral · Hugging Face Blog · Jul 14 · 4/10
🧠 The article title mentions fine-tuning Stable Diffusion models on Intel CPUs, suggesting content about AI model optimization on consumer hardware. However, no article body content was provided for analysis.
AI · Bullish · Hugging Face Blog · Mar 28 · 4/10
🧠 The article discusses techniques and optimizations for accelerating Stable Diffusion inference on Intel CPU architectures. This focuses on improving AI image generation performance without requiring specialized GPU hardware.
AI · Neutral · Hugging Face Blog · Feb 24 · 4/10
🧠 Swift Diffusers is a new implementation enabling fast Stable Diffusion image generation on Mac computers. The project appears to focus on optimizing AI image generation performance for Apple's hardware ecosystem.
AI · Neutral · Hugging Face Blog · Jan 26 · 4/10
🧠 The article appears to discuss LoRA (Low-Rank Adaptation) techniques for efficiently fine-tuning Stable Diffusion models. However, the article body is empty, preventing detailed analysis of the content and implications.
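The LoRA idea itself is compact: freeze the pretrained weight W and train only a low-rank update, so the effective weight is W + (alpha/r)·B·A. A minimal NumPy sketch — dimensions are illustrative and `lora_weight` is a hypothetical helper, not a library API:

```python
import numpy as np

def lora_weight(W, A, B, alpha=8):
    """Effective weight with a LoRA update: W + (alpha/r) * B @ A,
    where only the low-rank factors A (r x d) and B (d x r) are trained."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

d, r = 64, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))        # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01 # small random init
B = np.zeros((d, r))                   # B starts at zero: no change at init

print(np.allclose(lora_weight(W, A, B), W))                    # True
print(f"trainable params: {A.size + B.size} vs full: {W.size}")  # 512 vs 4096
```

The zero-initialized `B` means training starts exactly at the pretrained model, and the parameter count drops from d² to 2·r·d.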
AI · Bullish · Hugging Face Blog · Dec 9 · 4/10
🧠 The article appears to discuss Hugging Face's integration with the Elixir programming community, potentially bringing AI models like GPT-2 and Stable Diffusion to Elixir developers. However, the article body appears to be empty or not provided, limiting detailed analysis.
AI · Neutral · Hugging Face Blog · Oct 13 · 4/10
🧠 The article appears to announce or discuss the implementation of Stable Diffusion, a popular AI image generation model, using the JAX and Flax frameworks. However, the article body is empty, limiting analysis to the title only.
AI · Neutral · Hugging Face Blog · Nov 9 · 3/10
🧠 The article appears to be about SDXL (Stable Diffusion XL) implementation using Latent Consistency LoRAs in a 4-step process. However, the article body is empty, making detailed analysis impossible.