42 articles tagged with #multimodal-llm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv · CS AI · 1d ago · 7/10
🧠 Researchers introduce DocSeeker, a multimodal AI system designed to improve long document understanding by implementing structured analysis, localization, and reasoning workflows. The breakthrough addresses critical limitations in existing large language models that struggle with lengthy documents due to high noise levels and weak training signals, achieving superior performance on both short and ultra-long documents.
AI · Bullish · arXiv · CS AI · 2d ago · 7/10
🧠 Researchers introduce LAST, a framework that enhances multimodal large language models' spatial reasoning by integrating specialized vision tools through an interactive sandbox interface. The approach achieves ~20% performance improvements over baseline models and outperforms proprietary closed-source LLMs on spatial reasoning tasks by converting complex tool outputs into consumable hints for language models.
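The tool-output-to-hint conversion described above can be sketched in a few lines. Everything here (the function names, the depth-tool output format) is a hypothetical illustration, not LAST's actual interface:

```python
# Minimal sketch: flatten a structured vision-tool result into a plain-text
# hint a language model can consume, then prepend hints to the question.

def tool_output_to_hint(tool_name: str, output: dict) -> str:
    """Flatten one structured tool result into a single readable hint line."""
    parts = [f"{k}={v}" for k, v in sorted(output.items())]
    return f"[{tool_name} hint] " + ", ".join(parts)

def augment_prompt(question: str, tool_results: dict) -> str:
    """Prepend hints from every tool call to the original question."""
    hints = [tool_output_to_hint(name, out) for name, out in tool_results.items()]
    return "\n".join(hints + [question])

prompt = augment_prompt(
    "Which object is closer to the camera?",
    {"depth_estimator": {"cup_depth_m": 0.8, "chair_depth_m": 2.3}},
)
```

The point of such a conversion is that the LLM never sees raw tool tensors, only short textual hints it can attend to alongside the question.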
AI · Bullish · arXiv · CS AI · 2d ago · 7/10
🧠 MM-LIMA demonstrates that multimodal large language models can achieve superior performance using only 200 high-quality instruction examples, only 6% of the data used in comparable systems. Researchers developed quality metrics and an automated data selector to filter vision-language datasets, showing that strategic data curation outweighs raw dataset size in model alignment.
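An automated data selector of the kind described might look like the minimal sketch below; the metric names and weights are invented for illustration and are not MM-LIMA's actual quality metrics:

```python
def quality_score(ex):
    # Hypothetical composite quality metric over per-example scores in [0, 1].
    return (0.5 * ex["caption_detail"]
            + 0.3 * ex["image_text_alignment"]
            + 0.2 * ex["instruction_complexity"])

def select_high_quality(examples, k=200):
    """Automated selector: rank all examples by quality, keep only the top k."""
    return sorted(examples, key=quality_score, reverse=True)[:k]
```

The idea is simply that a scalar quality ranking plus a hard cutoff replaces scale: 200 carefully chosen examples beat thousands of noisy ones.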
AI · Bullish · arXiv · CS AI · 2d ago · 7/10
🧠 Researchers propose a method to adapt 2D multimodal large language models for 3D medical imaging analysis, introducing a Text-Guided Hierarchical Mixture of Experts framework that enables task-specific feature extraction. The approach demonstrates improved performance on medical report generation and visual question answering tasks while reusing pre-trained parameters from 2D models.
AI · Bullish · arXiv · CS AI · 6d ago · 7/10
🧠 Q-Zoom is a new framework that improves the efficiency of multimodal large language models by intelligently processing high-resolution visual inputs. Using adaptive query-aware perception, the system achieves 2.5-4.4x faster inference speeds on document and high-resolution tasks while maintaining or exceeding baseline accuracy across multiple MLLM architectures.
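Query-aware selection of the sort Q-Zoom describes can be illustrated with a toy top-k filter over patch embeddings; this is a sketch only, and Q-Zoom's actual perception module is certainly more involved:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_patches(query_emb, patch_embs, keep_ratio=0.25):
    """Keep only the image patches most similar to the text query,
    so the LLM processes far fewer visual tokens on high-res inputs."""
    k = max(1, int(len(patch_embs) * keep_ratio))
    order = sorted(range(len(patch_embs)),
                   key=lambda i: cosine(query_emb, patch_embs[i]),
                   reverse=True)
    return sorted(order[:k])  # restore spatial order of surviving patches
```

Dropping query-irrelevant patches before the language model sees them is where the inference speedup comes from in approaches of this family.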
AI · Bullish · arXiv · CS AI · 6d ago · 7/10
🧠 Researchers propose Faithful-First RPA, a framework that improves multimodal AI reasoning by prioritizing faithfulness to visual evidence. The method uses FaithEvi for supervision and FaithAct for execution, achieving up to 24% improvement in perceptual faithfulness without sacrificing task accuracy.
AI · Bullish · arXiv · CS AI · Apr 7 · 7/10
🧠 Researchers propose Continuous Softened Retracing reSampling (CSRS) to improve the self-evolution of Multimodal Large Language Models by addressing biases in feedback mechanisms. The method uses continuous reward signals instead of binary rewards and achieves state-of-the-art results on mathematical reasoning benchmarks like MathVision using Qwen2.5-VL-7B.
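The continuous-versus-binary reward distinction can be made concrete with a toy comparison of resampling weights. The softmax form below is an illustrative assumption, not necessarily CSRS's exact formulation:

```python
import math

def softmax_weights(rewards, temperature=1.0):
    """Continuous reward signal -> soft resampling weights.
    Partial credit is preserved: a 0.4-reward sample outweighs a 0.1 one."""
    exps = [math.exp(r / temperature) for r in rewards]
    z = sum(exps)
    return [e / z for e in exps]

def binary_weights(rewards, threshold=0.5):
    """Binary baseline: all-or-nothing credit. If every sample fails the
    threshold, the ranking among failures is lost entirely."""
    passed = [1.0 if r >= threshold else 0.0 for r in rewards]
    z = sum(passed) or 1.0
    return [p / z for p in passed]
```

The failure mode the continuous signal fixes is visible on a batch where no sample passes: binary weights go to zero everywhere, while soft weights still rank the near-misses.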
AI · Neutral · arXiv · CS AI · Mar 17 · 7/10
🧠 Researchers identified that medical multimodal large language models (MLLMs) fail primarily due to inadequate visual grounding capabilities when analyzing medical images, unlike their success with natural scenes. They developed the VGMED evaluation dataset and proposed the VGRefine method, achieving state-of-the-art performance across 6 medical visual question-answering benchmarks without additional training.
AI · Bullish · arXiv · CS AI · Mar 17 · 7/10
🧠 Researchers developed SToRM, a new framework that reduces computational costs for autonomous driving systems using multi-modal large language models by up to 30x while maintaining performance. The system uses supervised token reduction techniques to enable real-time end-to-end driving on standard GPUs without sacrificing safety or accuracy.
AI · Bullish · arXiv · CS AI · Mar 16 · 7/10
🧠 Researchers developed HeteroServe, a system that optimizes multimodal large language model inference by partitioning vision encoding and language generation across different GPU tiers. The approach reduces data transfer requirements and achieves 31-40% cost savings while improving throughput by up to 54% compared to existing systems.
AI · Bullish · arXiv · CS AI · Mar 11 · 7/10
🧠 Researchers have developed Meissa, a lightweight 4B-parameter medical AI model that brings advanced agentic capabilities offline for healthcare applications. The system matches frontier models like GPT in medical benchmarks while operating with 25x fewer parameters and 22x lower latency, addressing privacy and cost concerns in clinical settings.
AI · Neutral · arXiv · CS AI · Mar 5 · 7/10
🧠 Researchers introduce SpatialBench, a comprehensive benchmark for evaluating spatial cognition in multimodal large language models (MLLMs). The framework reveals that while MLLMs excel at perceptual grounding, they struggle with symbolic reasoning, causal inference, and planning compared to humans who demonstrate more goal-directed spatial abstraction.
AI · Bullish · arXiv · CS AI · Mar 4 · 7/10
🧠 Researchers introduce OptMerge, a new benchmark and method for combining multiple expert Multimodal Large Language Models (MLLMs) into single, more capable models without requiring additional training data. The approach achieves 2.48% average performance gains while reducing storage and serving costs by merging models across different modalities like vision, audio, and video.
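The simplest form of training-free model merging, weighted parameter averaging, can be sketched as follows. Plain floats stand in for weight tensors, and OptMerge's actual method is presumably more sophisticated than a uniform average:

```python
def merge_state_dicts(state_dicts, weights=None):
    """Merge expert models by (optionally weighted) parameter averaging.
    Assumes all models share the same architecture / parameter names."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n  # uniform merge by default
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged
```

Because the merged model is a single set of parameters, storage and serving costs drop to those of one model, which is the cost argument the summary makes.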
AI · Bullish · arXiv · CS AI · Mar 3 · 7/10
🧠 Researchers propose Vid-LLM, a new video-based 3D multimodal large language model that processes video inputs without requiring external 3D data for scene understanding. The model uses a Cross-Task Adapter module and Metric Depth Model to integrate geometric cues and maintain consistency across 3D tasks like question answering and visual grounding.
AI · Bullish · arXiv · CS AI · Mar 3 · 7/10
🧠 Researchers developed SpiroLLM, the first multimodal large language model capable of understanding spirogram time series data for COPD diagnosis. Using data from 234,028 UK Biobank individuals, the model achieved 0.8977 diagnostic AUROC and maintained 100% valid response rate even with missing data, far outperforming text-only models.
AI · Neutral · arXiv · CS AI · Feb 27 · 7/10
🧠 Researchers introduce ProactiveMobile, a new benchmark for developing AI agents that can proactively anticipate user needs on mobile devices rather than just responding to commands. The benchmark includes over 3,600 test instances across 14 scenarios, with current models achieving low success rates, indicating significant room for improvement in proactive AI capabilities.
AI · Neutral · arXiv · CS AI · Feb 27 · 7/10
🧠 Researchers identified a fundamental limitation in multimodal LLMs where decoders trained on text cannot effectively utilize non-text information like speaker identity or visual textures, despite this information being preserved through all model layers. The study demonstrates this 'modality collapse' is due to decoder design rather than encoding failures, with experiments showing targeted training can improve specific modality accessibility.
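The standard way to test whether such information is "preserved through all model layers" is a lightweight probe on frozen hidden states; here is a minimal nearest-centroid version, illustrative rather than the paper's actual protocol:

```python
from collections import defaultdict

def centroid_probe(train_feats, train_labels, test_feats):
    """Nearest-centroid probe on frozen hidden states. If even this simple
    probe decodes e.g. speaker identity from a layer, the information is
    present there, and any failure to use it lies downstream in the decoder."""
    sums, counts = {}, defaultdict(int)
    for f, y in zip(train_feats, train_labels):
        sums[y] = f if y not in sums else [a + b for a, b in zip(sums[y], f)]
        counts[y] += 1
    centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}

    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    return [min(centroids, key=lambda y: dist(f, centroids[y])) for f in test_feats]
```

Running such a probe layer by layer is how one separates an encoding failure (probe accuracy drops) from a decoder-side failure (probe accuracy stays high while model outputs ignore the attribute).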
AI · Bullish · arXiv · CS AI · 1d ago · 6/10
🧠 Researchers introduce CLASP, a token reduction framework that optimizes Multimodal Large Language Models by intelligently pruning visual tokens through class-adaptive layer fusion and dual-stage pruning. The approach addresses computational inefficiency in MLLMs while maintaining performance across diverse benchmarks and architectures.
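The fuse-then-prune pattern can be sketched on plain score lists; the layer weights and keep ratios below are illustrative placeholders, not CLASP's learned, class-adaptive values:

```python
def fuse_layer_scores(per_layer_scores, layer_weights):
    """Fuse per-layer saliency scores into one score per visual token."""
    n_tokens = len(per_layer_scores[0])
    return [sum(w * layer[i] for w, layer in zip(layer_weights, per_layer_scores))
            for i in range(n_tokens)]

def dual_stage_prune(scores, coarse_keep=0.5, fine_keep=0.5):
    """Stage 1 keeps a coarse fraction of tokens by fused score;
    stage 2 prunes the survivors again to the final budget."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    stage1 = order[: max(1, int(len(scores) * coarse_keep))]
    stage2 = stage1[: max(1, int(len(stage1) * fine_keep))]
    return sorted(stage2)  # keep original token order
```

Splitting pruning into two stages lets a cheap coarse pass discard most tokens before any finer (and costlier) criterion is applied to the remainder.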
AI · Neutral · arXiv · CS AI · 2d ago · 6/10
🧠 Researchers introduce Diffusion-CAM, a novel interpretability method designed specifically for diffusion-based Multimodal Large Language Models (dMLLMs). Unlike existing visualization techniques optimized for sequential models, this approach accounts for the parallel denoising process inherent to diffusion architectures, achieving superior localization accuracy and visual fidelity in model explanations.
AI · Bullish · arXiv · CS AI · 2d ago · 6/10
🧠 Researchers introduce MCERF, a multimodal retrieval framework that combines vision-language models with LLM reasoning to improve question-answering from engineering documents. The system achieves a 41.1% relative accuracy improvement over baseline RAG systems by handling complex multimodal content like tables, diagrams, and dense technical text through adaptive routing and hybrid retrieval strategies.
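Adaptive routing plus hybrid score fusion can be sketched as follows; the keyword rules and the mixing weight `alpha` are invented placeholders, not MCERF's actual router:

```python
def route(query: str) -> str:
    """Toy adaptive router: pick a specialized retriever from query cues.
    A real system would likely use a learned classifier instead of keywords."""
    q = query.lower()
    if any(w in q for w in ("table", "column", "row")):
        return "table_retriever"
    if any(w in q for w in ("diagram", "figure", "schematic")):
        return "vision_retriever"
    return "text_retriever"

def hybrid_score(dense: float, lexical: float, alpha: float = 0.6) -> float:
    """Hybrid retrieval: convex combination of a dense (embedding) score
    and a lexical (keyword) score for the same candidate passage."""
    return alpha * dense + (1 - alpha) * lexical
```

Routing sends table questions to a table-aware retriever while the hybrid score keeps exact-term matches (part numbers, symbols) competitive with semantic matches, which matters in dense technical text.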
AI · Neutral · arXiv · CS AI · 3d ago · 6/10
🧠 Researchers investigate how multimodal large language models (MLLMs) can assist with usability evaluation of user interfaces by analyzing text and visual context together. The study compares MLLM-generated assessments against expert evaluations, finding that these models can effectively prioritize usability issues by severity and offer complementary insights to traditional resource-intensive evaluation methods.
AI · Neutral · arXiv · CS AI · 6d ago · 6/10
🧠 Q-Probe introduces a novel agentic framework for scaling image quality assessment to high-resolution images by addressing limitations in existing reinforcement learning approaches. The research presents Vista-Bench, a new benchmark for fine-grained degradation analysis, and demonstrates state-of-the-art performance across multiple resolution scales through context-aware probing mechanisms.
AI · Neutral · arXiv · CS AI · Apr 7 · 6/10
🧠 Researchers identify critical limitations in current Multimodal Large Language Models' ability to understand physics and physical world dynamics. They propose Scene Dynamic Field (SDF), a new approach using physics simulators that achieves up to 20.7% performance improvements on fluid dynamics tasks.
AI · Bullish · arXiv · CS AI · Apr 6 · 6/10
🧠 Researchers have developed ForgeryGPT, a new multimodal AI framework that can detect, localize, and explain image forgeries through natural language interaction. The system combines advanced computer vision techniques with large language models to provide interpretable analysis of tampered images, addressing limitations in current forgery detection methods.
AI · Neutral · arXiv · CS AI · Mar 27 · 6/10
🧠 Researchers introduce ReLope, a new routing method for multimodal large language models that uses KL-regularized LoRA probes and attention mechanisms to improve cost-performance balance. The method addresses the challenge of degraded probe performance when visual inputs are added to text-only LLMs.
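A KL regularizer over a routing distribution can be sketched as below. Taking the uniform distribution as the reference is an illustrative assumption, since the summary does not specify ReLope's actual reference distribution:

```python
import math

def router_probs(logits):
    """Softmax routing distribution over experts, numerically stabilized."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_to_uniform(probs):
    """KL(p || uniform). Adding this term to the routing loss penalizes
    collapse onto a single expert: it is 0 for uniform routing and grows
    as the distribution becomes more peaked."""
    n = len(probs)
    return sum(p * math.log(p * n) for p in probs if p > 0)
```

In a router's training loss, a term like `lambda * kl_to_uniform(router_probs(logits))` trades routing sharpness against keeping all experts utilized.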