160 articles tagged with #vision-language-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 6d ago · 6/10
🧠 Researchers developed a multimodal generative AI pipeline that creates synthetic residential building datasets from publicly available county records and images, addressing critical data scarcity challenges in building energy modeling. The system achieves over 65% overlap with national reference data, enabling scalable energy research and urban simulations without relying on expensive or privacy-restricted datasets.
AI · Bullish · arXiv – CS AI · 6d ago · 6/10
🧠 Researchers introduce PyFi, a framework enabling vision-language models to understand financial images through progressive reasoning chains, backed by a 600K synthetic dataset organized as a reasoning pyramid. The approach uses adversarial agents to automatically generate training data without human annotation, achieving up to 19.52% accuracy improvements on fine-tuned models.
AI · Bullish · arXiv – CS AI · 6d ago · 6/10
🧠 Researchers propose a Self-Validation Framework to address object hallucination in Large Vision Language Models (LVLMs), where models generate descriptions of non-existent objects in images. The training-free approach validates object existence through language-prior-free verification and achieves a 65.6% improvement on benchmark metrics, suggesting a novel path to enhance LVLM reliability without additional training.
AI · Bearish · arXiv – CS AI · Apr 7 · 6/10
🧠 Research reveals that Vision Language Models (VLMs) progressively lose visual grounding during reasoning tasks, creating dangerous low-entropy predictions that appear confident but lack visual evidence. The study found attention to visual evidence drops by over 50% during reasoning across multiple benchmarks, requiring task-aware monitoring for safe AI deployment.
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers developed a new method to reduce hallucinations in Large Vision-Language Models (LVLMs) by identifying a three-phase attention structure in vision processing and selectively suppressing low-attention tokens during the focus phase. The training-free approach significantly reduces object hallucinations while maintaining caption quality with minimal inference latency impact.
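The core operation here, keeping only the visual tokens that receive the most attention, can be sketched in a few lines. This is an illustrative approximation, not the paper's implementation: the function name, the fixed `keep_ratio`, and the assumption that per-token attention scores are available as a flat array are all ours.

```python
import numpy as np

def suppress_low_attention_tokens(attn_scores, keep_ratio=0.5):
    """Keep only the visual tokens with the highest attention mass.

    attn_scores: 1-D array of per-token attention scores.
    Returns a boolean mask: True = keep the token, False = suppress it.
    """
    k = max(1, int(len(attn_scores) * keep_ratio))
    # Indices of the k highest-scoring tokens (argsort is ascending).
    top_k = np.argsort(attn_scores)[-k:]
    mask = np.zeros(len(attn_scores), dtype=bool)
    mask[top_k] = True
    return mask

scores = np.array([0.05, 0.40, 0.02, 0.30, 0.23])
mask = suppress_low_attention_tokens(scores, keep_ratio=0.4)
# keeps the two highest-attention tokens (indices 1 and 3)
```

In the actual method the suppression is applied only during the identified "focus" phase; here the phase logic is omitted.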
AI · Neutral · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers developed an AI framework using reinforcement learning to automatically discover failure modes in vision-language models without human intervention. The system trains a questioner agent that generates adaptive queries to expose weaknesses, successfully identifying 36 novel failure modes across various VLM combinations.
AI · Bearish · arXiv – CS AI · Apr 6 · 6/10
🧠 Researchers introduce VLM-UnBench, the first benchmark for evaluating training-free visual concept unlearning in Vision Language Models. The study reveals that realistic prompts fail to genuinely remove sensitive or copyrighted visual concepts, with meaningful suppression only occurring under oracle conditions that explicitly disclose target concepts.
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers introduce Learning to Guide (LTG), a new AI framework where machines provide interpretable guidance to human decision-makers rather than making automated decisions. The SLOG approach transforms vision-language models into guidance generators using human feedback, showing promise in medical diagnosis applications.
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers introduce ELITE, a new framework that enables embodied AI agents to learn from their own experiences and transfer knowledge to similar tasks. The system addresses failures in vision-language models when performing complex physical tasks by using self-reflective knowledge construction and intent-aware retrieval mechanisms.
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers investigated whether Vision-Language Models (VLMs) can reason robustly under distribution shifts and found that fine-tuned VLMs achieve high accuracy in-distribution but fail to generalize. They propose VLC, a neuro-symbolic method combining VLM-based concept recognition with circuit-based symbolic reasoning that demonstrates consistent performance under covariate shifts.
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers identify 'multi-view hallucination' as a major problem in large vision-language models (LVLMs), where these AI systems confuse visual information from different viewpoints or instances. They created the MVH-Bench benchmark and developed Reference Shift Contrastive Decoding (RSCD), a technique that improved performance by up to 34.6 points without requiring model retraining.
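The summary does not spell out RSCD's exact scoring rule, but contrastive decoding generally compares next-token distributions under a correct conditioning versus a corrupted one. A minimal sketch of one such decoding step, with the function name, `alpha`, and `beta` chosen by us for illustration:

```python
import numpy as np

def contrastive_decode_step(logits_ref, logits_shift, alpha=1.0, beta=0.1):
    """One step of generic contrastive decoding.

    logits_ref:   next-token logits conditioned on the correct reference.
    logits_shift: logits conditioned on a shifted/confusable reference.
    Tokens are scored by how much the correct conditioning favors them
    over the shifted one; tokens far less likely than the best reference
    token (below beta times its probability) are masked out first.
    """
    log_p_ref = logits_ref - np.log(np.exp(logits_ref).sum())
    log_p_shift = logits_shift - np.log(np.exp(logits_shift).sum())
    scores = log_p_ref - alpha * log_p_shift
    # Adaptive plausibility cutoff, as in standard contrastive decoding.
    scores[log_p_ref < np.max(log_p_ref) + np.log(beta)] = -np.inf
    return int(np.argmax(scores))
```

The selected token is the one the correct view supports but the shifted view does not, which is the intuition behind penalizing view-confused predictions.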
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers introduced LensWalk, an agentic AI framework that enables Large Language Models to actively control their visual observation of videos through dynamic temporal sampling. The system uses a reason-plan-observe loop to progressively gather evidence, achieving 5% accuracy improvements on challenging video benchmarks without requiring model fine-tuning.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce Geo-ADAPT, a new AI framework using Vision-Language Models for image geo-localization that adapts reasoning depth based on image complexity. The system uses an Optimized Locatability Score and a specialized dataset to achieve state-of-the-art performance while reducing AI hallucinations.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers have introduced UVLM (Universal Vision-Language Model Loader), a Google Colab-based framework that provides a unified interface for loading, configuring, and benchmarking multiple Vision-Language Model architectures. The framework currently supports LLaVA-NeXT and Qwen2.5-VL models and enables researchers to compare different VLMs using identical evaluation protocols on custom image analysis tasks.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce SmoothVLA, a new reinforcement learning framework that improves robot control by optimizing both task performance and motion smoothness. The system addresses the trade-off between stability and exploration in Vision-Language-Action models, achieving 13.8% better smoothness than standard RL methods.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose OxyGen, a unified KV cache management system for Vision-Language-Action Models that enables efficient multi-task parallelism in embodied AI agents. The system achieves up to 3.7x speedup by sharing computational resources across tasks and eliminating redundant processing of shared observations.
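The "eliminating redundant processing of shared observations" idea reduces to memoizing encodings keyed by observation content, so that concurrent tasks seeing the same frame reuse one cached result. A toy sketch, not OxyGen's actual cache design (the class name, hashing key, and hit/miss counters are our own illustration):

```python
import hashlib

class SharedObservationCache:
    """Toy shared cache: tasks that see the same observation reuse one
    cached encoding instead of each recomputing it."""

    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, observation: bytes, encode):
        # Content-addressed key: identical observations share one entry.
        key = hashlib.sha256(observation).hexdigest()
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = encode(observation)
        else:
            self.hits += 1
        return self._cache[key]

cache = SharedObservationCache()
encode = lambda obs: len(obs)          # stand-in for an expensive encoder
a = cache.get(b"frame-0", encode)      # computed once (miss)
b = cache.get(b"frame-0", encode)      # reused by a second task (hit)
```

A real VLA-serving cache would additionally manage per-task KV entries and eviction; this only shows the sharing principle.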
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose 'Two Birds, One Projection,' a new inference-time defense method for Large Vision-Language Models that simultaneously improves both safety and utility performance. The method addresses modality-induced bias by projecting cross-modal features onto the null space of identified bias directions, breaking the traditional safety-utility tradeoff.
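Null-space projection itself is standard linear algebra: remove every feature component lying in the span of the bias directions and keep the orthogonal complement. A minimal sketch, assuming bias directions have already been identified as row vectors (how the paper identifies them is not described here):

```python
import numpy as np

def null_space_project(features, bias_dirs):
    """Project features onto the null space of identified bias directions.

    features:  (n, d) feature matrix.
    bias_dirs: (k, d) matrix whose rows span the bias subspace.
    """
    # Orthonormal basis Q for the bias subspace via QR decomposition.
    q, _ = np.linalg.qr(bias_dirs.T)           # shape (d, k)
    # P = I - Q Q^T projects onto the orthogonal complement of the span.
    proj = np.eye(bias_dirs.shape[1]) - q @ q.T
    return features @ proj.T
```

After projection, the features have zero component along every bias direction while the remaining (utility-carrying) directions are untouched.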
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose MA-VLCM, a framework that uses pretrained vision-language models as centralized critics in multi-agent reinforcement learning instead of learning critics from scratch. This approach significantly improves sample efficiency and enables zero-shot generalization while producing compact policies suitable for resource-constrained robots.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce VisionZip, a new method that reduces redundant visual tokens in vision-language models while maintaining performance. The technique improves inference speed by 8x and achieves 5% better performance than existing methods by selecting only informative tokens for processing.
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers developed VLAD-Grasp, a training-free robotic grasping system that uses vision-language models to detect graspable objects without requiring curated datasets. The system achieves competitive performance with state-of-the-art methods on benchmark datasets and demonstrates zero-shot generalization to real-world robotic manipulation tasks.
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10
🧠 Researchers propose CVS, a training-free method for selecting high-quality vision-language training data that requires genuine cross-modal reasoning. The method achieves better performance using only 10-15% of data compared to full dataset training, while reducing computational costs by up to 44%.
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10
🧠 Researchers introduce ARAS400k, a large-scale remote sensing dataset containing 400k images (100k real, 300k synthetic) with segmentation maps and descriptions. The study demonstrates that combining real and synthetic data consistently outperforms training on real data alone for semantic segmentation and image captioning tasks.
AI · Bullish · arXiv – CS AI · Mar 11 · 6/10
🧠 Researchers propose Ego, a new method for personalizing vision-language AI models without requiring additional training stages. The approach extracts visual tokens using the model's internal attention mechanisms to create concept memories, enabling personalized responses across single-concept, multi-concept, and video scenarios.
AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers developed E-AdaPrune, an energy-driven adaptive pruning framework that optimizes Vision-Language Models by dynamically allocating visual tokens based on image information density. The method shows up to 0.6% average improvement across benchmarks, with a notable 5.1% boost on reasoning tasks, while adding only 8ms latency per image.
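"Allocating visual tokens based on image information density" means busy images get a larger token budget than flat ones. The summary does not say how density is measured, so this sketch uses grayscale histogram entropy as a stand-in proxy; the function name and the 64-576 token range are our assumptions, not the paper's:

```python
import numpy as np

def adaptive_token_budget(image, min_tokens=64, max_tokens=576):
    """Pick a visual-token budget from a crude information-density proxy.

    image: 2-D array of 8-bit-range grayscale values.
    Flat, low-entropy images get few tokens; busy images get more.
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))    # in [0, 8] bits for 8-bit images
    density = entropy / 8.0              # normalize to [0, 1]
    return int(min_tokens + density * (max_tokens - min_tokens))
```

A constant image hits the minimum budget; an image split evenly between two gray levels has 1 bit of entropy and lands proportionally higher.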
AI · Neutral · arXiv – CS AI · Mar 9 · 6/10
🧠 Researchers have identified a critical failure mode in Vision-Language-Action (VLA) robotic models called 'linguistic blindness,' where robots prioritize visual cues over language instructions when the two conflict. They developed the ICBench benchmark and proposed IGAR, a training-free solution that recalibrates attention to restore the influence of language instructions without requiring model retraining.