y0news
🧠 AI · 🔴 Bearish · Importance 6/10

Can VLMs Truly Forget? Benchmarking Training-Free Visual Concept Unlearning

arXiv – CS AI | Zhangyun Tan, Zeliang Zhang, Susan Liang, Yolo Yunlong Tang, Lisha Chen, Chenliang Xu
🤖 AI Summary

Researchers introduce VLM-UnBench, the first benchmark for evaluating training-free visual concept unlearning in Vision Language Models. The study reveals that realistic unlearning prompts fail to genuinely remove sensitive or copyrighted visual concepts; meaningful suppression occurs only under oracle conditions that explicitly disclose the target concepts.

Key Takeaways
  • VLM-UnBench is the first comprehensive benchmark for testing training-free visual concept unlearning across 4 forgetting levels, 7 datasets, and 11 concept axes.
  • Realistic unlearning prompts leave forget accuracy near baseline levels, showing minimal genuine concept removal.
  • Object and scene concepts are most resistant to suppression through prompt-based methods.
  • Stronger instruction-tuned models maintain capabilities despite explicit forget instructions.
  • There is a significant gap between prompt-level suppression and true visual concept erasure in VLMs.
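The gap the takeaways describe can be illustrated with a minimal evaluation sketch. This is a hypothetical illustration, not VLM-UnBench's actual protocol: the `vlm_answer` interface, the `stubborn_vlm` stand-in model, and the sample data are all invented here for demonstration. The idea is to measure forget accuracy (how often the model still answers correctly about a concept it was told to forget) with and without a forget instruction; near-identical values mean no genuine concept removal.

```python
# Sketch of prompt-based unlearning evaluation. `vlm_answer(image,
# question, system_prompt)` is a hypothetical model interface; the real
# benchmark's metrics and prompts may differ.

def forget_accuracy(samples, vlm_answer, system_prompt=""):
    """Fraction of forget-set questions the model still answers correctly.

    Under a forget instruction, lower is better: a value near the
    no-instruction baseline indicates the concept was not truly removed.
    """
    correct = 0
    for image, question, answer in samples:
        if vlm_answer(image, question, system_prompt) == answer:
            correct += 1
    return correct / len(samples)


# Toy stand-in model that ignores the forget instruction entirely,
# mimicking the finding that realistic prompts barely suppress concepts.
def stubborn_vlm(image, question, system_prompt):
    return "cat"  # always recognizes the target concept

samples = [("img1.png", "What animal is shown?", "cat"),
           ("img2.png", "What animal is shown?", "cat")]

baseline = forget_accuracy(samples, stubborn_vlm)
suppressed = forget_accuracy(samples, stubborn_vlm,
                             system_prompt="Forget the concept 'cat'.")
# baseline == suppressed here: prompt-level "unlearning" changed nothing.
```

For this toy model both accuracies are 1.0, which is exactly the failure mode the benchmark reports for realistic prompts on strong instruction-tuned models.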
Read Original → via arXiv – CS AI