
What Helps -- and What Hurts: Bidirectional Explanations for Vision Transformers

arXiv – CS AI | Qin Su, Tie Luo
AI Summary

Researchers propose BiCAM, a new method for interpreting Vision Transformer (ViT) decisions that captures both positive and negative contributions to predictions. The approach improves explanation quality and enables adversarial example detection across multiple ViT variants without requiring model retraining.

Key Takeaways
  • BiCAM introduces bidirectional class activation mapping that preserves both supportive and suppressive signals in ViT explanations.
  • The method includes a Positive-to-Negative Ratio (PNR) metric that can detect adversarial examples without retraining models.
  • BiCAM demonstrates improved localization and faithfulness across ImageNet, VOC, and COCO datasets while remaining computationally efficient.
  • The approach generalizes successfully to multiple ViT variants including DeiT and Swin transformers.
  • Results highlight the importance of modeling both positive and negative evidence for better transformer interpretability.
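The summary does not give BiCAM's actual formulation, but the core idea in the takeaways can be illustrated with a minimal sketch: split a signed attribution map into its supportive (positive) and suppressive (negative) parts, and compute a Positive-to-Negative Ratio over those masses. All function names, the threshold, and the ratio definition below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical sketch only: BiCAM's real attribution computation operates on
# ViT internals; here we assume a signed attribution map is already available.

def split_attributions(attr: np.ndarray):
    """Split a signed attribution map into supportive (+) and suppressive (-) parts."""
    pos = np.clip(attr, 0, None)   # keep positive contributions, zero out the rest
    neg = np.clip(attr, None, 0)   # keep negative contributions, zero out the rest
    return pos, neg

def positive_to_negative_ratio(attr: np.ndarray, eps: float = 1e-8) -> float:
    """Illustrative PNR: total positive evidence over total negative evidence."""
    pos, neg = split_attributions(attr)
    return float(pos.sum() / (np.abs(neg).sum() + eps))

def looks_adversarial(attr: np.ndarray, threshold: float = 1.0) -> bool:
    """Flag inputs whose explanation carries unusually heavy suppressive evidence.

    The threshold here is an arbitrary placeholder; a real detector would
    calibrate it on clean data, as no retraining of the model is required.
    """
    return positive_to_negative_ratio(attr) < threshold
```

The appeal of a ratio like this is that it is a pure post-hoc statistic of the explanation, which is consistent with the claim that detection works without retraining the model.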