#diffusion-transformers (3 articles)
AI · Bullish · arXiv – CS AI · 5d ago · 7/10

BWCache: Accelerating Video Diffusion Transformers through Block-Wise Caching

Researchers have developed BWCache, a training-free method that accelerates Diffusion Transformer (DiT) video generation by up to 6× through block-wise feature caching and reuse. The technique exploits computational redundancy in DiT blocks across timesteps while maintaining visual quality, addressing a key bottleneck in real-world AI video generation applications.
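The core idea of block-wise caching can be illustrated with a minimal sketch: cache each block's output, and when a block's input at the current timestep is nearly identical to the cached one, reuse the stored output instead of recomputing. All names and the similarity threshold below are illustrative assumptions, not taken from the paper.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class BlockWiseCache:
    """Per-block feature cache: skip a block's computation when its input
    barely changed since the timestep at which its output was cached."""

    def __init__(self, threshold=0.99):
        self.threshold = threshold
        self.cached_inputs = {}   # block index -> last computed input features
        self.cached_outputs = {}  # block index -> corresponding output features
        self.hits = 0
        self.misses = 0

    def run_block(self, idx, features, block_fn):
        cached_in = self.cached_inputs.get(idx)
        if cached_in is not None and cosine_similarity(cached_in, features) >= self.threshold:
            self.hits += 1
            return self.cached_outputs[idx]  # reuse: computation skipped
        self.misses += 1
        out = block_fn(features)
        self.cached_inputs[idx] = features
        self.cached_outputs[idx] = out
        return out
```

Because adjacent denoising timesteps produce very similar intermediate features, a loose threshold lets many block evaluations become cache hits, which is where the speedup comes from.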

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Dual-IPO: Dual-Iterative Preference Optimization for Text-to-Video Generation

Researchers introduce Dual-Iterative Preference Optimization (Dual-IPO), a new method that iteratively improves both reward models and video generation models to create higher-quality AI-generated videos better aligned with human preferences. The approach enables smaller 2B parameter models to outperform larger 5B models without requiring manual preference annotations.
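The dual-iterative structure described above can be sketched on a toy 1-D problem: each round, the generator's own samples, ranked by a fixed proxy preference (standing in for annotation-free preference pairs), refresh the reward model, and the generator is then optimized against the refreshed reward model. Everything here is a hypothetical illustration of the alternating loop, not the paper's actual training procedure.

```python
import random

def dual_ipo_round(gen_param, rm_param, rng, num_samples=16, lr=0.1):
    """One round of a toy dual-iterative loop.

    Step 1: preferred generator samples update the reward model.
    Step 2: the generator is optimized against the updated reward model.
    """
    target = 5.0  # hidden notion of "good", used only by the proxy ranker
    samples = [gen_param + rng.gauss(0, 1) for _ in range(num_samples)]
    ranked = sorted(samples, key=lambda s: abs(s - target))
    winners = ranked[: num_samples // 2]  # preferred half, no manual labels
    # Step 1: move the reward model's peak toward the preferred samples.
    rm_param += lr * (sum(winners) / len(winners) - rm_param)
    # Step 2: move the generator toward the reward model's peak.
    gen_param += lr * (rm_param - gen_param)
    return gen_param, rm_param

def dual_ipo(rounds=200, seed=0):
    rng = random.Random(seed)
    gen_param, rm_param = 0.0, 0.0
    for _ in range(rounds):
        gen_param, rm_param = dual_ipo_round(gen_param, rm_param, rng)
    return gen_param, rm_param
```

The point of the alternation is that neither model is trained once and frozen: a better generator yields sharper preference pairs for the reward model, and a better reward model gives cleaner gradients for the generator.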

AI · Bullish · Hugging Face Blog · Jul 30 · 6/10

Memory-efficient Diffusion Transformers with Quanto and Diffusers

The article discusses memory-efficient Diffusion Transformers built with the Quanto quantization library integrated into Diffusers. By storing model weights at lower precision, large image generation models run with a substantially smaller memory footprint, making them more accessible for deployment.
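The memory saving rests on a simple mechanism: mapping float weights to 8-bit integers plus a scale factor, cutting storage from 4 bytes to 1 byte per weight. The sketch below shows that core idea in plain Python; the function names are illustrative and are not Quanto's API.

```python
def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Each stored value fits in one byte instead of four (float32).
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for use at inference time."""
    return [x * scale for x in q]
```

Real libraries layer per-channel scales, activation handling, and fused kernels on top of this, but the memory arithmetic is the same: int8 weights are roughly a quarter the size of float32 ones, at the cost of a bounded rounding error per weight.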