arXiv – CS AI · 5d ago

EditReward: A Human-Aligned Reward Model for Instruction-Guided Image Editing

Researchers developed EditReward, a human-aligned reward model for instruction-guided image editing, trained on over 200K human preference pairs. The model outperforms prior approaches on established benchmarks and can be used to filter training data for quality, addressing a key supervision bottleneck for open-source image editing models.
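The digest does not describe EditReward's training objective, but reward models learned from preference pairs are typically trained with a Bradley-Terry style pairwise loss. The minimal PyTorch sketch below illustrates that general recipe; the `PairwiseRewardLoss` class, the linear `reward_head`, and the 512-dimensional stand-in embeddings are illustrative assumptions, not the paper's actual architecture or loss.

```python
import torch
import torch.nn as nn

class PairwiseRewardLoss(nn.Module):
    """Bradley-Terry style loss: push the reward of the human-preferred
    edit above the reward of the rejected edit."""
    def forward(self, r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
        # -log sigmoid(r_chosen - r_rejected), averaged over the batch
        return -nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Toy reward head: scores a joint (instruction, source image, edited image)
# embedding. In a real system the backbone would be a vision-language model;
# here a linear head over a precomputed 512-dim embedding stands in for it.
reward_head = nn.Linear(512, 1)
loss_fn = PairwiseRewardLoss()

emb_chosen = torch.randn(8, 512)    # embeddings of preferred edits
emb_rejected = torch.randn(8, 512)  # embeddings of rejected edits
loss = loss_fn(reward_head(emb_chosen).squeeze(-1),
               reward_head(emb_rejected).squeeze(-1))
loss.backward()  # gradients flow into the reward head
```

Once trained this way, the same scalar reward can rank candidate edits, which is how such a model can double as a quality filter for training data.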