160 articles tagged with #vision-language-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers analyzed Vision-Language Models (VLMs) used in automated driving to understand why they fail on simple visual tasks. They identified two failure modes: perceptual failure, where visual information is never encoded, and cognitive failure, where the information is present but not properly aligned with language semantics.
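The paper's probing setup isn't given in this summary, but a generic way to separate the two modes is: if a linear probe can read an attribute out of frozen vision-encoder features while the VLM still answers incorrectly, the failure is cognitive rather than perceptual. The sketch below assumes exactly that; all names, thresholds, and the toy data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def diagnose_failure(vision_feats, labels, vlm_correct):
    """Hypothetical diagnostic: probe frozen visual features for an attribute
    and compare probe accuracy with the VLM's answer accuracy."""
    n = len(labels) // 2
    probe = LogisticRegression(max_iter=1000).fit(vision_feats[:n], labels[:n])
    probe_acc = probe.score(vision_feats[n:], labels[n:])
    vlm_acc = np.mean(vlm_correct[n:])
    if probe_acc < 0.6:           # heuristic cutoffs, not the paper's
        return "perceptual failure: attribute not linearly decodable"
    if vlm_acc < probe_acc - 0.2:
        return "cognitive failure: encoded but not surfaced in language"
    return "no systematic failure detected"

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 32))
labels = (feats[:, 0] > 0).astype(int)            # attribute lives in the features
vlm_correct = rng.random(200) > 0.5               # toy VLM answering at chance
print(diagnose_failure(feats, labels, vlm_correct))
```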
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers introduce HiPP-Prune, a new framework for efficiently compressing vision-language models while maintaining performance and reducing hallucinations. The hierarchical approach uses preference-based pruning that considers multiple objectives including task utility, visual grounding, and compression efficiency.
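To make "preference-based pruning over multiple objectives" concrete, here is a hedged sketch: score each prunable structure on the three objectives the summary names and keep the top fraction by weighted sum. The weights, scores, and keep ratio are assumptions, not HiPP-Prune's actual hierarchy.

```python
import numpy as np

def preference_prune(objectives, weights=(0.5, 0.3, 0.2), keep_ratio=0.5):
    """Illustrative multi-objective pruning: each candidate structure (e.g.
    an attention head) has a task-utility, visual-grounding, and compression
    score; keep the top fraction by weighted sum."""
    utility, grounding, compression = (np.asarray(o) for o in objectives)
    w_u, w_g, w_c = weights
    score = w_u * utility + w_g * grounding + w_c * compression
    n_keep = max(1, int(len(score) * keep_ratio))
    return np.argsort(score)[::-1][:n_keep]       # indices of structures kept

# Example: 8 attention heads with per-objective scores in [0, 1]
rng = np.random.default_rng(0)
keep = preference_prune([rng.random(8), rng.random(8), rng.random(8)])
print("heads kept:", sorted(keep.tolist()))
```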
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers developed DEX-AR, a new explainability method for autoregressive Vision-Language Models that generates 2D heatmaps to understand how these AI systems make decisions. The method addresses challenges in interpreting modern VLMs by analyzing token-by-token generation and visual-textual interactions, showing improved performance across multiple benchmarks.
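DEX-AR's exact relevance propagation isn't reproduced here; the sketch below shows the generic ingredient such methods build on: mapping a generated token's attention over image tokens back onto the 2D patch grid. All shapes and the image-token slice are assumptions.

```python
import numpy as np

def token_attention_heatmap(attn, img_token_slice, grid=(24, 24)):
    """Generic attention-to-patch relevance (not DEX-AR's actual algorithm):
    average one generated token's attention over layers and heads, take the
    slice covering image tokens, and reshape it into a 2D heatmap.
    attn: [layers, heads, query_len, key_len] for one decoding step."""
    per_key = attn.mean(axis=(0, 1))[-1]          # last query position
    patch_scores = per_key[img_token_slice]       # attention mass on image tokens
    patch_scores = patch_scores / (patch_scores.sum() + 1e-8)
    return patch_scores.reshape(grid)             # 2D heatmap over patches

attn = np.random.rand(32, 16, 80, 640)            # toy shapes
heatmap = token_attention_heatmap(attn, slice(4, 4 + 24 * 24))
print(heatmap.shape)                              # (24, 24)
```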
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers developed MAP (Map-Level Attention Processing), a training-free method to reduce hallucinations in Large Vision-Language Models by treating hidden states as 2D semantic maps. The approach uses attention-based operations to better leverage factual information and improve consistency between generated text and visual inputs.
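As a loose illustration of the "hidden states as 2D semantic maps" idea (the operator below is an assumption, not the paper's): reshape visual hidden states onto the patch grid and apply a training-free attention-style smoothing over that map.

```python
import torch
import torch.nn.functional as F

def map_level_smooth(hidden, grid=(24, 24), tau=0.1):
    """Hypothetical map-level operation: view visual hidden states as a 2D
    grid and replace each cell with an attention-weighted mixture of all
    cells, pulling generation back toward visually grounded features."""
    h, w = grid
    x = hidden.view(h * w, -1)                          # [tokens, dim]
    sim = F.softmax(x @ x.t() / (tau * x.shape[-1] ** 0.5), dim=-1)
    return (sim @ x).view(h, w, -1)                     # smoothed semantic map

hidden = torch.randn(24 * 24, 1024)
print(map_level_smooth(hidden).shape)                   # torch.Size([24, 24, 1024])
```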
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers introduced VLMQ, a post-training quantization framework specifically designed for vision-language models that addresses visual over-representation and modality gaps. The method achieves significant performance improvements, including 16.45% better results on MME-RealWorld under 2-bit quantization compared to existing approaches.
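For context, the baseline that post-training schemes like VLMQ refine is plain per-channel round-to-nearest weight quantization, sketched below; VLMQ's modality-aware weighting itself is not reproduced here.

```python
import torch

def quantize_per_channel(w, bits=2):
    """Per-channel symmetric round-to-nearest: the naive PTQ baseline."""
    qmax = 2 ** (bits - 1) - 1                          # 1 for 2-bit
    scale = w.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                                    # dequantized weights

w = torch.randn(4096, 4096)
w_q = quantize_per_channel(w, bits=2)
print((w - w_q).abs().mean())                           # quantization error
```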
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers introduce 3DThinker, a new framework that enables vision-language models to perform 3D spatial reasoning from limited 2D views without requiring 3D training data. The system uses a two-stage training approach to align 3D representations with foundation models and demonstrates superior performance across multiple benchmarks.
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers present CASA, a new approach using cross-attention over self-attention for vision-language models that maintains competitive performance while significantly reducing memory and compute costs. The method shows particular advantages for real-time applications like video captioning by avoiding expensive token insertion into language model streams.
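A rough illustration of why cross-attention saves memory here: visual features enter only as keys and values, so the language model's sequence length (and KV cache) never grows with image size. A minimal PyTorch sketch, with all dimensions assumed:

```python
import torch
import torch.nn as nn

class CrossAttnBlock(nn.Module):
    """Minimal sketch of the idea behind CASA (sizes are assumptions):
    text queries attend to visual features instead of inserting visual
    tokens into the language stream."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, visual):
        out, _ = self.attn(query=text, key=visual, value=visual)
        return text + out                               # residual fusion

text = torch.randn(1, 32, 768)                          # 32 text tokens
visual = torch.randn(1, 576, 768)                       # 576 image patches
print(CrossAttnBlock()(text, visual).shape)             # stays [1, 32, 768]
```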
AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠 Researchers introduce DP-MTV, the first framework enabling privacy-preserving multimodal in-context learning for vision-language models using differential privacy. The system allows processing hundreds of demonstrations while maintaining formal privacy guarantees, achieving competitive performance on benchmarks like VizWiz with only minimal accuracy loss.
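DP-MTV's exact mechanism isn't given in this summary. A classic way to get formal DP guarantees from many demonstrations is PATE-style noisy aggregation, sketched below with hypothetical inputs: each demonstration contributes one vote, and Laplace noise on the vote histogram bounds any single demonstration's influence.

```python
import numpy as np

def dp_noisy_vote(per_demo_answers, num_classes, epsilon=1.0, rng=None):
    """PATE-style private aggregation (an assumption, not necessarily
    DP-MTV's mechanism). Changing one demonstration moves the histogram
    by at most 2, so Laplace noise with scale 2/epsilon gives epsilon-DP."""
    rng = rng or np.random.default_rng()
    votes = np.bincount(per_demo_answers, minlength=num_classes).astype(float)
    votes += rng.laplace(scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(votes))

answers = np.array([2, 2, 2, 1, 2, 0, 2, 2])            # answers under 8 demos
print(dp_noisy_vote(answers, num_classes=3, epsilon=1.0))
```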
AI · Bullish · arXiv – CS AI · Mar 6 · 6/10
🧠 Researchers propose AoD-IP, a new framework for protecting intellectual property in vision-language models through dynamic authorization and legality-aware assessment. The system allows flexible, user-controlled authorization that can adapt to changing deployment scenarios while preventing unauthorized use of valuable AI models.
AI · Neutral · arXiv – CS AI · Mar 6 · 6/10
🧠 Researchers found that vision-language models like Qwen-VL and LLaVA compute object affordances in highly context-dependent ways, with over 90% of scene descriptions changing based on contextual priming. The study reveals that these AI models don't have fixed understanding of objects but dynamically interpret them based on different situational contexts.
AI · Bullish · arXiv – CS AI · Mar 5 · 5/10
🧠 Researchers developed GarmentPile++, an AI pipeline that uses vision-language models to retrieve individual garments from cluttered piles following natural language instructions. The system integrates visual affordance perception with dual-arm robotics to handle complex garment manipulation tasks in real-world home assistant applications.
AI · Bullish · arXiv – CS AI · Mar 4 · 5/10
🧠 Researchers have developed VL-KGE, a new framework that combines Vision-Language Models with Knowledge Graph Embeddings to better process multimodal knowledge graphs. The approach addresses limitations in existing methods by enabling stronger cross-modal alignment and more unified representations across diverse data types.
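For readers unfamiliar with KGE scoring, here is a minimal TransE-style score over fused multimodal entity embeddings. The fusion rule is an illustrative assumption, not VL-KGE's actual alignment module.

```python
import torch

def transe_score(head, rel, tail):
    """Classic TransE plausibility: lower distance = more plausible triple."""
    return -torch.norm(head + rel - tail, p=2, dim=-1)

def multimodal_entity(text_emb, image_emb, w=0.5):
    """Illustrative fusion (an assumption): weighted mix of a VLM's text
    and image embeddings for one entity."""
    return w * text_emb + (1 - w) * image_emb

t, i = torch.randn(2, 256), torch.randn(2, 256)
h, tail = multimodal_entity(t[0], i[0]), multimodal_entity(t[1], i[1])
print(transe_score(h, torch.randn(256), tail))
```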
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduced SpinBench, a new benchmark for evaluating spatial reasoning abilities in vision-language models (VLMs), focusing on perspective taking and viewpoint transformations. Testing 43 state-of-the-art VLMs revealed systematic weaknesses, including strong egocentric bias and poor rotational understanding; human performance (91.2% accuracy) significantly outpaces every model tested.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers developed COMRES-VLM, a new framework using Vision Language Models to coordinate multiple robots for exploration and object search in indoor environments. The system achieved 10.2% faster exploration and 55.7% higher search efficiency compared to existing methods, while enabling natural language-based human guidance.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce AdaptVision, a new Vision-Language Model that reduces computational overhead by adaptively determining the minimum visual tokens needed per sample. The model uses a coarse-to-fine approach with reinforcement learning to balance accuracy and efficiency, achieving superior performance while consuming fewer visual tokens than existing methods.
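One way to picture the coarse-to-fine budget (the schedule, subsampling, and threshold below are assumptions, not AdaptVision's learned policy): try progressively larger visual token budgets and stop as soon as the model's confidence on the current sample clears a threshold.

```python
import torch

def adaptive_visual_tokens(patches, score_fn, budgets=(64, 256, 576), thr=0.8):
    """Hypothetical coarse-to-fine token selection: grow the visual token
    budget only while the model remains unsure about this sample."""
    n = patches.shape[0]
    for budget in budgets:
        idx = torch.linspace(0, n - 1, min(budget, n)).long()  # uniform subsample
        tokens = patches[idx]
        conf = score_fn(tokens)            # e.g. max answer probability
        if conf >= thr:
            return tokens, budget
    return patches, n                      # fall back to all tokens

patches = torch.randn(576, 1024)
tokens, used = adaptive_visual_tokens(patches, lambda t: 0.9)
print(used)                                # 64: this easy sample stops early
```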
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers propose a training-free paradigm for empowering Vision-Language Models with multi-modal search capabilities through cross-modal model merging. The approach uses Optimal Brain Merging (OBM) to combine text-based search agents with base VLMs without requiring expensive supervised training or reinforcement learning.
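OBM's importance-aware procedure isn't detailed in this summary; below is the naive checkpoint-interpolation baseline that any merging scheme refines, with `alpha` and the hypothetical parameter names as assumptions.

```python
import torch

def merge_state_dicts(base, donor, alpha=0.3):
    """Simple linear merge of two checkpoints sharing an architecture.
    Only the naive baseline: OBM reportedly weighs parameters by
    importance rather than using one global alpha."""
    return {
        k: (1 - alpha) * base[k] + alpha * donor[k].to(base[k].dtype)
        for k in base
    }

base = {"lm_head.weight": torch.randn(8, 8)}            # base VLM (toy)
donor = {"lm_head.weight": torch.randn(8, 8)}           # text search agent (toy)
merged = merge_state_dicts(base, donor)
print(merged["lm_head.weight"].shape)
```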
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers developed ST-Lite, a training-free KV cache compression framework that accelerates GUI agents by 2.45x while using only 10-20% of the cache budget. The solution addresses memory and latency constraints in Vision-Language Models for autonomous GUI interactions through specialized attention pattern optimization.
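A sketch of the general attention-guided KV eviction idea such frameworks build on (the scoring rule is an assumption, not ST-Lite's exact policy): keep only the cache entries that recent queries attended to most.

```python
import torch

def compress_kv(keys, values, attn_history, budget=0.15):
    """Hypothetical eviction: rank cache positions by the average attention
    they received from recent queries and keep the top ~15% (inside the
    10-20% range the summary quotes)."""
    scores = attn_history.mean(dim=0)                   # [seq]: attention received
    k = max(1, int(scores.shape[0] * budget))
    keep = torch.topk(scores, k).indices.sort().values
    return keys[:, keep], values[:, keep]

keys = torch.randn(8, 4096, 128)                        # [heads, seq, head_dim]
values = torch.randn(8, 4096, 128)
attn_history = torch.rand(64, 4096)                     # last 64 queries' attention
k2, v2 = compress_kv(keys, values, attn_history)
print(k2.shape)                                         # [8, 614, 128]
```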
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers developed a new token reduction method for hybrid vision-language models that process long videos, achieving 3.8-4.2x speedup while retaining only 25% of visual tokens. The approach uses progressive reduction and unified scoring for both attention and Mamba blocks, maintaining near-baseline accuracy on long-context video benchmarks.
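To see how progressive reduction reaches a 25% budget, here is a sketch where per-stage keep ratios are assumptions and a random score stands in for the paper's unified attention/Mamba importance score.

```python
import torch

def progressive_reduce(tokens, scores, stage_keep=(0.7, 0.6, 0.6)):
    """Drop the lowest-scoring visual tokens after successive blocks;
    compounded, 0.7 * 0.6 * 0.6 is roughly 25% of tokens surviving."""
    for keep in stage_keep:
        k = max(1, int(tokens.shape[0] * keep))
        idx = torch.topk(scores, k).indices.sort().values
        tokens, scores = tokens[idx], scores[idx]
    return tokens

tokens = torch.randn(8192, 1024)                        # long-video visual tokens
kept = progressive_reduce(tokens, torch.rand(8192))
print(kept.shape[0] / 8192)                             # ~0.25
```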
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers developed a Vision-Language Model capable of estimating 3D object positions from monocular RGB images for human-robot interaction. The model achieved a median position error of 13 mm and produces predictions accurate enough for robot interaction in 25% of cases, a five-fold improvement over baseline methods.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers identify a critical flaw in Vision-Language Model evaluation for radiology, where high benchmark scores mask models' failure to generate clinically specific terminology. They propose new metrics including Clinical Association Displacement (CAD) to measure bias and clinical signal loss across demographic groups.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce Multi-View Video Reward Shaping (MVR), a new reinforcement learning framework that uses multi-viewpoint video analysis and vision-language models to improve reward design for complex AI tasks. The system addresses limitations of single-image approaches by analyzing dynamic motions across multiple camera angles, showing improved performance on humanoid locomotion and manipulation tasks.
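One simple way to realize multi-view reward shaping (the aggregation below is an assumption, not MVR's actual design): query a VLM once per camera view and blend the scores, so a single occluded view cannot dominate the reward.

```python
import numpy as np

def multi_view_reward(view_clips, vlm_score, weights=None):
    """Hypothetical shaping: score the same motion from several viewpoints
    with a VLM and return a weighted blend of the per-view judgments."""
    scores = np.array([vlm_score(clip) for clip in view_clips])
    w = np.ones_like(scores) if weights is None else np.asarray(weights)
    return float((w * scores).sum() / w.sum())

# Toy stand-in for a VLM scoring "is the robot walking forward?" in [0, 1]
clips = ["front_cam", "side_cam", "top_cam"]
print(multi_view_reward(clips, vlm_score=lambda c: 0.8 if "front" in c else 0.6))
```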
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce OmniSpatial, a comprehensive benchmark for testing spatial reasoning capabilities in vision-language models (VLMs). The benchmark reveals significant limitations in both open and closed-source VLMs across four major spatial reasoning categories, with over 8,400 question-answer pairs testing advanced cognitive abilities.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers introduce DesignSense-10k, a dataset of 10,235 human-annotated preference pairs for evaluating graphic layout generation, along with DesignSense, a specialized AI model that outperforms existing models by 54.6% in layout quality assessment. The framework addresses the gap between AI-generated layouts and human aesthetic preferences, showing practical improvements in layout generation through reinforcement learning.
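Human preference pairs like DesignSense-10k's are the standard input for a Bradley-Terry reward model; whether the paper trains exactly this objective is not stated in the summary. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, better, worse):
    """Standard Bradley-Terry objective: push the reward of the preferred
    layout above the rejected one in each annotated pair."""
    margin = reward_model(better) - reward_model(worse)
    return -F.logsigmoid(margin).mean()

rm = torch.nn.Linear(128, 1)                            # toy reward model
better, worse = torch.randn(16, 128), torch.randn(16, 128)
loss = preference_loss(rm, better, worse)
loss.backward()
print(float(loss))
```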
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers developed MedMAP, a Medical Modality-Aware Pretraining framework that enhances vision-language models for 3D MRI multi-organ abnormality detection. The framework addresses challenges in modality-specific alignment and cross-modal feature fusion, demonstrating superior performance on a curated dataset of 7,392 3D MRI volume-report pairs.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers propose ProtoDCS, a new framework for robust test-time adaptation of Vision-Language Models in open-set scenarios. The method uses Gaussian Mixture Model verification and uncertainty-aware learning to better handle distribution shifts while maintaining computational efficiency.
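A sketch of the GMM-verification idea in the open-set setting (the mechanism below is an assumption about how such a check could work, not ProtoDCS itself): fit a mixture to source-like features and exclude low-likelihood test samples from adaptation updates as probable unknown-class inputs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_open_set(train_feats, test_feats, reject_quantile=0.05):
    """Hypothetical open-set gate: test samples whose GMM log-likelihood
    falls below the source distribution's low quantile are rejected from
    test-time adaptation."""
    gmm = GaussianMixture(n_components=4, random_state=0).fit(train_feats)
    threshold = np.quantile(gmm.score_samples(train_feats), reject_quantile)
    keep = gmm.score_samples(test_feats) >= threshold
    return keep                                         # mask of in-set samples

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 16))
test = np.vstack([rng.normal(size=(50, 16)),            # in-distribution
                  rng.normal(5, 1, size=(10, 16))])     # shifted outliers
print(filter_open_set(train, test).sum(), "of", len(test), "kept")
```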