arXiv · CS AI · 5d ago
EditReward: A Human-Aligned Reward Model for Instruction-Guided Image Editing
Researchers developed EditReward, a human-aligned reward model for instruction-guided image editing, trained on over 200K human preference pairs. The model outperforms prior approaches on established benchmarks and can effectively filter high-quality training data, addressing a key bottleneck for open-source image editing models.
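The data-filtering use case can be sketched as follows. This is a hypothetical illustration, not the paper's actual API: `score_edit` stands in for the real EditReward model (which would score an instruction together with the source and edited images), and the filtering step simply keeps the top-scoring candidates for training.

```python
# Hypothetical sketch: use a reward model's scores to filter
# instruction-edit training examples, keeping only the best-rated ones.

def score_edit(instruction: str, edited_caption: str) -> float:
    """Placeholder reward. The real model would consume images and text;
    here we fake a score via word overlap, purely for illustration."""
    return float(len(set(instruction.split()) & set(edited_caption.split())))

def filter_top_k(examples: list[dict], k: int, score_fn) -> list[dict]:
    """Rank examples by reward score and keep the k highest."""
    ranked = sorted(
        examples,
        key=lambda ex: score_fn(ex["instruction"], ex["edited"]),
        reverse=True,
    )
    return ranked[:k]

examples = [
    {"instruction": "add a red hat", "edited": "a man with a red hat"},
    {"instruction": "remove the car", "edited": "an empty street"},
    {"instruction": "make the sky blue", "edited": "a green field"},
]
kept = filter_top_k(examples, k=2, score_fn=score_edit)
print(len(kept))  # 2
```

In practice the threshold (or `k`) trades off dataset size against quality; the summary's claim is that filtering with a human-aligned reward model yields better training data than using the raw pool.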