You Don't Need All That Attention: Surgical Memorization Mitigation in Text-to-Image Diffusion Models
Researchers introduce GUARD, a framework designed to prevent text-to-image diffusion models from memorizing and reproducing their training data, behavior that raises privacy and copyright concerns. GUARD attenuates attention weights to steer generation away from memorized training images while preserving prompt alignment and image quality.
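To make the idea of attention attenuation concrete, here is a minimal PyTorch sketch of cross-attention in which the attention weights assigned to a chosen subset of text tokens are scaled down before the weighted sum over values. This is an illustration of the general technique, not GUARD's actual implementation: the function name, the `trigger_idx` argument, the `scale` factor, and the renormalization step are all assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F


def attenuated_cross_attention(
    q: torch.Tensor,            # (batch, n_img_tokens, d) image-latent queries
    k: torch.Tensor,            # (batch, n_txt_tokens, d) text-token keys
    v: torch.Tensor,            # (batch, n_txt_tokens, d) text-token values
    trigger_idx: torch.Tensor,  # hypothetical: indices of text tokens suspected
                                # of triggering memorized outputs
    scale: float = 0.3,         # hypothetical attenuation factor in [0, 1]
) -> torch.Tensor:
    """Cross-attention where selected text tokens' attention weights are
    scaled down, weakening their influence on the generated image latents."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d**0.5   # (batch, n_img, n_txt)
    weights = F.softmax(logits, dim=-1)

    # Attenuate only the columns of the suspected trigger tokens.
    mask = torch.ones(weights.size(-1), device=weights.device)
    mask[trigger_idx] = scale
    weights = weights * mask

    # One possible design choice (assumed here): renormalize so each query's
    # weights still sum to one, redistributing mass to the remaining tokens.
    weights = weights / weights.sum(dim=-1, keepdim=True)

    return weights @ v                          # (batch, n_img, d)


if __name__ == "__main__":
    # Toy shapes loosely modeled on a Stable-Diffusion-style cross-attention
    # layer (77 text tokens); all sizes here are illustrative.
    b, n_img, n_txt, d = 1, 64, 77, 320
    q, k, v = (torch.randn(b, n, d) for n in (n_img, n_txt, n_txt))
    out = attenuated_cross_attention(q, k, v, trigger_idx=torch.tensor([5, 6]))
    print(out.shape)  # torch.Size([1, 64, 320])
```

The intuition this sketch captures is that memorized generations are often driven by a few strongly attended prompt tokens; damping their attention weights nudges sampling away from the memorized image while the untouched tokens keep the output aligned with the prompt.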