🧠 AI · 🟢 Bullish · Importance 6/10

ViCLIP-OT: The First Foundation Vision-Language Model for Vietnamese Image-Text Retrieval with Optimal Transport

arXiv – CS AI | Quoc-Khang Tran, Minh-Thien Nguyen, Nguyen-Khang Pham
🤖 AI Summary

Researchers introduced ViCLIP-OT, the first foundation vision-language model specifically designed for Vietnamese image-text retrieval. The model combines CLIP-style contrastive learning with a Similarity-Graph Regularized Optimal Transport (SIGROT) loss, achieving significant improvements over existing baselines, including 67.34% average Recall@K on the UIT-OpenViIC benchmark.

Key Takeaways
  • ViCLIP-OT is the first foundation vision-language model specifically optimized for Vietnamese image-text retrieval tasks.
  • The model improves upon CLIP by 5.75 percentage points on UIT-OpenViIC and 11.72 percentage points in zero-shot evaluation on Crossmodal-3600.
  • SIGROT loss integration enhances global cross-modal consistency and reduces modality-gap issues in low-resource language settings.
  • Extensive testing on three Vietnamese benchmarks demonstrates consistent outperformance in both in-domain and zero-shot scenarios.
  • The approach provides a scalable strategy for cross-modal retrieval systems in underrepresented linguistic contexts beyond Vietnamese.
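The summary's description of combining CLIP-style contrastive learning with an optimal-transport loss can be sketched roughly as follows. The SIGROT loss itself is not specified here, so this sketch substitutes a standard entropic OT (Sinkhorn) alignment term as a stand-in; the function names, weighting, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_contrastive_loss(img, txt, temperature=0.07):
    # Symmetric InfoNCE over the image-text similarity matrix, as in CLIP:
    # matched pairs sit on the diagonal and are pulled together.
    logits = img @ txt.T / temperature
    labels = np.arange(len(img))
    def ce(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()
    return 0.5 * (ce(logits) + ce(logits.T))

def sinkhorn_plan(cost, eps=0.1, n_iters=50):
    # Entropic OT between uniform marginals via Sinkhorn iterations
    # (a stand-in for the paper's similarity-graph-regularized variant).
    n, m = cost.shape
    K = np.exp(-cost / eps)
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m) / m
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def ot_alignment_loss(img, txt, eps=0.1):
    # Global batch-level alignment: total transport cost under the OT plan,
    # using cosine distance as the ground cost.
    cost = 1.0 - img @ txt.T
    plan = sinkhorn_plan(cost, eps)
    return (plan * cost).sum()

# Toy batch: text embeddings are noisy copies of image embeddings.
rng = np.random.default_rng(0)
img = l2_normalize(rng.normal(size=(8, 64)))
txt = l2_normalize(img + 0.05 * rng.normal(size=(8, 64)))
total_loss = clip_contrastive_loss(img, txt) + 0.5 * ot_alignment_loss(img, txt)
```

The intuition matches the takeaways above: the contrastive term handles pairwise matching, while the OT term enforces a globally consistent assignment across the whole batch, which is one way to shrink the modality gap in low-resource settings.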