y0news
#flux · 4 articles
AI · Bullish · arXiv – CS AI · 5d ago · 6/10 · 3
🧠

Does FLUX Already Know How to Perform Physically Plausible Image Composition?

Researchers introduce SHINE, a training-free framework that enables FLUX and other diffusion models to perform high-quality image composition without retraining. The framework addresses complex lighting scenarios such as shadows and reflections, achieving state-of-the-art performance on the new ComplexCompo benchmark.

AI · Bullish · arXiv – CS AI · 5d ago · 6/10 · 4
🧠

DragFlow: Unleashing DiT Priors with Region Based Supervision for Drag Editing

DragFlow introduces the first framework to leverage FLUX's DiT priors for drag-based image editing, addressing distortion issues that plagued earlier Stable Diffusion-based approaches. The system uses region-based editing with affine transformations instead of point-based supervision, achieving state-of-the-art results on benchmarks.
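To make the contrast concrete, here is a toy sketch (in NumPy, with entirely illustrative names, not DragFlow's actual method or API) of why region-based supervision with an affine transform is less ambiguous than point-based supervision: the drag vector defines an affine map, and an entire patch of features is matched under that map rather than a single pixel.

```python
import numpy as np

def drag_affine(handle, target):
    """2x3 affine matrix for the translation implied by dragging
    handle -> target (a pure translation is the simplest affine map)."""
    t = np.asarray(target, float) - np.asarray(handle, float)
    return np.array([[1.0, 0.0, t[0]],
                     [0.0, 1.0, t[1]]])

def region_coords(center, radius):
    """(N, 2) grid of integer pixel coordinates around center."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return np.stack([ys.ravel() + center[0], xs.ravel() + center[1]], axis=1)

def region_loss(feat, handle, target, radius=1):
    """Mean feature difference between a source region and its
    affine-mapped destination; radius=0 reduces to point supervision."""
    M = drag_affine(handle, target)
    src = region_coords(handle, radius)
    dst = ((M[:, :2] @ src.T).T + M[:, 2]).round().astype(int)
    return float(np.abs(feat[src[:, 0], src[:, 1]]
                        - feat[dst[:, 0], dst[:, 1]]).mean())

# On a feature map that is constant along each row, a horizontal drag
# maps the region onto identical features, so the loss is zero.
feat = np.tile(np.arange(16.0)[:, None], (1, 16))
print(region_loss(feat, handle=(8, 4), target=(8, 10)))  # 0.0
```

Supervising all pixels of the patch jointly constrains the edit far more than a single handle/target pair, which is the intuition behind replacing point-based supervision.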

AI · Bullish · arXiv – CS AI · 5d ago · 6/10 · 3
🧠

MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference

MeanCache introduces a training-free caching framework that accelerates Flow Matching inference by substituting average velocities for instantaneous ones. The framework achieves 3.59× to 4.56× acceleration on models such as FLUX.1, Qwen-Image, and HunyuanVideo while maintaining superior generation quality compared to existing caching methods.
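The core idea can be illustrated with a toy flow-matching sampler (this is a crude stand-in for the paper's method; all function names and the caching schedule are illustrative): integrating dx/dt = v(x, t) with one velocity evaluation per Euler step, versus evaluating once per interval and reusing that value as an approximate average velocity, which halves the number of (in practice, expensive network) calls.

```python
import numpy as np

def instantaneous_sampler(v_fn, x, n_steps=8):
    """Plain Euler integration: one velocity evaluation per step."""
    dt = 1.0 / n_steps
    calls = 0
    for i in range(n_steps):
        x = x + dt * v_fn(x, i * dt)
        calls += 1
    return x, calls

def mean_velocity_sampler(v_fn, x, n_steps=8, cache_every=2):
    """Hypothetical caching scheme: evaluate once per interval (at the
    interval midpoint, as a rough average velocity) and reuse it."""
    dt = 1.0 / n_steps
    calls = 0
    v_cached = None
    for i in range(n_steps):
        if i % cache_every == 0:
            v_cached = v_fn(x, (i + 0.5 * (cache_every - 1)) * dt)
            calls += 1
        x = x + dt * v_cached
    return x, calls

# Straight-line (rectified) flow from x0=0 toward x1=1 has a constant
# velocity, so the averaged velocity is exact and both samplers agree.
v = lambda x, t: np.ones_like(x)
x0 = np.zeros(4)
x_full, n_full = instantaneous_sampler(v, x0)
x_cached, n_cached = mean_velocity_sampler(v, x0)
print(n_full, n_cached)               # 8 4
print(np.allclose(x_full, x_cached))  # True
```

For curved trajectories the cached average is only an approximation, which is where the quality-versus-speed trade-off that MeanCache targets comes from.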

AI · Bullish · Hugging Face Blog · Jul 23 · 4/10 · 8
🧠

Fast LoRA inference for Flux with Diffusers and PEFT

The article presents techniques for fast LoRA inference with Flux models using the Diffusers and PEFT libraries, focusing on reducing the overhead of serving LoRA adapters on top of a base diffusion model at inference time.
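One reason LoRA inference can be made cheap, sketched below in NumPy (a generic illustration of LoRA weight fusion, not code from the article): a LoRA adapter adds a low-rank update W' = W + (alpha/r)·B·A to a frozen weight, and that update can be merged into W once up front, so each forward pass costs a single matmul instead of a matmul plus a low-rank side path.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                      # hidden size, LoRA rank (illustrative)
W = rng.normal(size=(d, d))       # frozen base weight
A = rng.normal(size=(r, d))       # LoRA down-projection
B = rng.normal(size=(d, r))       # LoRA up-projection
alpha = 8.0                       # LoRA scaling numerator
x = rng.normal(size=(d,))

# Unfused: base path plus LoRA side path on every call.
y_unfused = W @ x + (alpha / r) * (B @ (A @ x))

# Fused: pay the (alpha/r) * B @ A merge cost once; after that,
# inference is a single matmul with the merged weight.
W_fused = W + (alpha / r) * (B @ A)
y_fused = W_fused @ x

print(np.allclose(y_unfused, y_fused))  # True
```

The trade-off is that a fused weight serves only one adapter at a time; swapping adapters means re-merging or keeping the side path, which is why adapter-swapping strategies matter for serving many LoRAs.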