y0news

#multimodal-llm News & Analysis

42 articles tagged with #multimodal-llm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

42 articles
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠

DocSeeker: Structured Visual Reasoning with Evidence Grounding for Long Document Understanding

Researchers introduce DocSeeker, a multimodal AI system designed to improve long document understanding by implementing structured analysis, localization, and reasoning workflows. The approach addresses limitations of existing large language models, which struggle with lengthy documents due to high noise and weak training signals, and achieves superior performance on both short and ultra-long documents.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠

LAST: Leveraging Tools as Hints to Enhance Spatial Reasoning for Multimodal Large Language Models

Researchers introduce LAST, a framework that enhances the spatial reasoning of multimodal large language models by integrating specialized vision tools through an interactive sandbox interface. The approach converts complex tool outputs into hints the language model can consume directly, achieving roughly 20% improvements over baseline models and outperforming proprietary models on spatial reasoning tasks.
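
A rough sketch of the tools-as-hints pattern the summary describes: run a vision tool, serialize its structured output into plain text, and prepend it to the question so the model receives it as ordinary context. The tool output fields and helper names here are illustrative assumptions, not LAST's actual interface.

```python
# Hypothetical sketch of the "tools as hints" pattern: run a vision tool,
# serialize its structured output into plain text, and prepend it to the
# question so a multimodal LLM can consume it as extra context.
# Tool names, fields, and helpers are illustrative only.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple       # (x1, y1, x2, y2) in pixels
    depth_m: float   # estimated distance from the camera

def detections_to_hint(detections: list[Detection]) -> str:
    """Convert raw tool output into a short natural-language hint."""
    lines = [
        f"- {d.label} at box {d.box}, roughly {d.depth_m:.1f} m away"
        for d in detections
    ]
    return "Spatial hints from vision tools:\n" + "\n".join(lines)

def build_prompt(question: str, detections: list[Detection]) -> str:
    """Combine tool-derived hints with the original spatial question."""
    return f"{detections_to_hint(detections)}\n\nQuestion: {question}"

if __name__ == "__main__":
    dets = [
        Detection("chair", (40, 120, 180, 300), 1.8),
        Detection("table", (200, 100, 460, 320), 2.4),
    ]
    print(build_prompt("Which object is closer to the camera?", dets))
```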

AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠

MM-LIMA: Less Is More for Alignment in Multi-Modal Datasets

MM-LIMA demonstrates that multimodal large language models can achieve superior performance using only 200 high-quality instruction examples, just 6% of the data used in comparable systems. Researchers developed quality metrics and an automated data selector to filter vision-language datasets, showing that strategic data curation outweighs raw dataset size in model alignment.
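
As a hedged illustration of what "quality metrics plus an automated data selector" might look like in practice, the sketch below scores instruction examples with simple proxy metrics and keeps only the top 200. The metrics and weights are assumptions for demonstration, not MM-LIMA's.

```python
# Illustrative sketch of quality-driven data selection: score each
# instruction example with simple proxy metrics and keep only the top few
# hundred. The metrics and threshold here are assumptions, not MM-LIMA's.

def quality_score(example: dict) -> float:
    """Toy quality proxy: longer, more detailed responses score higher,
    near-empty instructions score lower."""
    instruction, response = example["instruction"], example["response"]
    length_score = min(len(response.split()) / 100.0, 1.0)
    instruction_score = 1.0 if len(instruction.split()) >= 5 else 0.3
    return 0.7 * length_score + 0.3 * instruction_score

def select_top_k(dataset: list[dict], k: int = 200) -> list[dict]:
    """Keep the k highest-scoring examples instead of the full dataset."""
    return sorted(dataset, key=quality_score, reverse=True)[:k]

if __name__ == "__main__":
    data = [
        {"instruction": "Describe the chart in the image in detail.",
         "response": "The bar chart compares quarterly revenue " * 10},
        {"instruction": "Caption.", "response": "A dog."},
    ]
    print([round(quality_score(d), 2) for d in data])
    print(len(select_top_k(data, k=1)))
```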

AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠

Adapting 2D Multi-Modal Large Language Model for 3D CT Image Analysis

Researchers propose a method to adapt 2D multimodal large language models for 3D medical imaging analysis, introducing a Text-Guided Hierarchical Mixture of Experts framework that enables task-specific feature extraction. The approach demonstrates improved performance on medical report generation and visual question answering tasks while reusing pre-trained parameters from 2D models.
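
A minimal sketch of text-guided expert gating, assuming PyTorch: a small gate reads a text (task) embedding and mixes experts that process the visual features, so the same 3D features can be routed differently per task prompt. Dimensions and expert definitions are made up; this is not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextGuidedMoE(nn.Module):
    """Toy text-guided mixture of experts: the gate sees only the text/task
    embedding, so routing depends on the prompt rather than the image."""

    def __init__(self, feat_dim=256, text_dim=512, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.GELU())
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(text_dim, num_experts)

    def forward(self, visual_feats, text_emb):
        weights = torch.softmax(self.gate(text_emb), dim=-1)      # (B, E)
        expert_outs = torch.stack(
            [e(visual_feats) for e in self.experts], dim=1        # (B, E, D)
        )
        return (weights.unsqueeze(-1) * expert_outs).sum(dim=1)   # (B, D)

if __name__ == "__main__":
    moe = TextGuidedMoE()
    out = moe(torch.randn(2, 256), torch.randn(2, 512))
    print(out.shape)  # torch.Size([2, 256])
```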

AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠

Q-Zoom: Query-Aware Adaptive Perception for Efficient Multimodal Large Language Models

Q-Zoom is a new framework that improves the efficiency of multimodal large language models by intelligently processing high-resolution visual inputs. Using adaptive query-aware perception, the system achieves 2.5-4.4x faster inference speeds on document and high-resolution tasks while maintaining or exceeding baseline accuracy across multiple MLLM architectures.
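
A hedged sketch of query-aware adaptive perception: split a high-resolution image into tiles, score each tile's relevance to the question, and keep only the most relevant tiles at full resolution. The relevance heuristic below is a stand-in for a learned module; all names are illustrative.

```python
from PIL import Image

def tile_image(img: Image.Image, grid: int = 4):
    """Split an image into a grid x grid set of tiles with their boxes."""
    w, h = img.size
    tw, th = w // grid, h // grid
    for row in range(grid):
        for col in range(grid):
            box = (col * tw, row * th, (col + 1) * tw, (row + 1) * th)
            yield box, img.crop(box)

def select_tiles(img, relevance_fn, query, keep=4, grid=4):
    """Keep only the tiles most relevant to the query at full resolution."""
    scored = [
        (relevance_fn(tile, query), box, tile)
        for box, tile in tile_image(img, grid)
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(box, tile) for _, box, tile in scored[:keep]]

if __name__ == "__main__":
    # Toy relevance: prefer brighter tiles (stand-in for a learned scorer).
    brightness = lambda tile, q: sum(tile.convert("L").getdata()) / (tile.width * tile.height)
    image = Image.new("RGB", (1024, 1024), color=(120, 120, 120))
    kept = select_tiles(image, brightness, "What does the header say?")
    print(len(kept), kept[0][0])
```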

AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠

Faithful-First Reasoning, Planning, and Acting for Multimodal LLMs

Researchers propose Faithful-First RPA, a framework that improves multimodal AI reasoning by prioritizing faithfulness to visual evidence. The method uses FaithEvi for supervision and FaithAct for execution, achieving up to 24% improvement in perceptual faithfulness without sacrificing task accuracy.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Stabilizing Unsupervised Self-Evolution of MLLMs via Continuous Softened Retracing reSampling

Researchers propose Continuous Softened Retracing reSampling (CSRS) to improve the self-evolution of Multimodal Large Language Models by addressing biases in feedback mechanisms. The method uses continuous reward signals instead of binary rewards and achieves state-of-the-art results on mathematical reasoning benchmarks like MathVision using Qwen2.5-VL-7B.
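
The sketch below contrasts a binary correctness reward with a continuous (softened) reward of the kind the summary describes, which gives the self-evolution loop a denser training signal. The similarity-based scorer is an assumption for illustration, not the paper's reward definition.

```python
from difflib import SequenceMatcher

def binary_reward(prediction: str, answer: str) -> float:
    """Standard 0/1 reward: only exact matches get credit."""
    return 1.0 if prediction.strip() == answer.strip() else 0.0

def continuous_reward(prediction: str, answer: str) -> float:
    """Softened reward: partial credit proportional to answer similarity."""
    return SequenceMatcher(None, prediction.strip(), answer.strip()).ratio()

if __name__ == "__main__":
    pred, gold = "x = 3/4", "x = 3/4 or x = -3/4"
    print(binary_reward(pred, gold))                 # 0.0 - no signal at all
    print(round(continuous_reward(pred, gold), 2))   # partial credit
```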

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

How Do Medical MLLMs Fail? A Study on Visual Grounding in Medical Images

Researchers identified that medical multimodal large language models (MLLMs) fail primarily due to inadequate visual grounding when analyzing medical images, in contrast to their success on natural scenes. They developed the VGMED evaluation dataset and proposed the VGRefine method, achieving state-of-the-art performance across 6 medical visual question-answering benchmarks without additional training.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10
🧠

Cost-Efficient Multimodal LLM Inference via Cross-Tier GPU Heterogeneity

Researchers developed HeteroServe, a system that optimizes multimodal large language model inference by partitioning vision encoding and language generation across different GPU tiers. The approach reduces data transfer requirements and achieves 31-40% cost savings while improving throughput by up to 54% compared to existing systems.
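
A back-of-the-envelope sketch of the cross-tier idea: place each stage on the cheapest GPU tier whose latency still fits that stage's budget, so the lighter vision-encoding stage can move off the expensive accelerators. All prices and latencies below are made-up illustrative numbers, not figures from the paper.

```python
TIERS = {                      # $/hour and a rough relative speed factor
    "high_end": {"cost": 4.00, "speed": 1.0},
    "mid_tier": {"cost": 1.20, "speed": 0.45},
}

STAGES = {                     # seconds per request on the high-end tier
    "vision_encoding":     {"base_latency": 0.08, "budget": 0.25},
    "language_generation": {"base_latency": 0.60, "budget": 0.80},
}

def cheapest_feasible_tier(stage: dict) -> str:
    """Pick the lowest-cost tier whose scaled latency fits the budget."""
    feasible = [
        (tier["cost"], name)
        for name, tier in TIERS.items()
        if stage["base_latency"] / tier["speed"] <= stage["budget"]
    ]
    return min(feasible)[1]

if __name__ == "__main__":
    for stage_name, stage in STAGES.items():
        print(stage_name, "->", cheapest_feasible_tier(stage))
    # vision encoding fits on the cheaper tier; generation stays high-end
```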

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

Meissa: Multi-modal Medical Agentic Intelligence

Researchers have developed Meissa, a lightweight 4B-parameter medical AI model that brings advanced agentic capabilities to offline healthcare deployments. The system matches frontier models such as GPT on medical benchmarks while operating with 25x fewer parameters and 22x lower latency, addressing privacy and cost concerns in clinical settings.

🧠 Gemini
AI · Neutral · arXiv – CS AI · Mar 5 · 7/10
🧠

SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition

Researchers introduce SpatialBench, a comprehensive benchmark for evaluating spatial cognition in multimodal large language models (MLLMs). The framework reveals that while MLLMs excel at perceptual grounding, they struggle with symbolic reasoning, causal inference, and planning compared to humans who demonstrate more goal-directed spatial abstraction.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10
🧠

OptMerge: Unifying Multimodal LLM Capabilities and Modalities via Model Merging

Researchers introduce OptMerge, a new benchmark and method for combining multiple expert Multimodal Large Language Models (MLLMs) into single, more capable models without requiring additional training data. The approach achieves 2.48% average performance gains while reducing storage and serving costs by merging models across different modalities like vision, audio, and video.
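
For intuition, the sketch below shows the simplest training-free form of weight-space merging, assuming PyTorch: averaging matching parameters of experts that share a base architecture. OptMerge's actual merging scheme is more sophisticated; this only illustrates the general idea.

```python
import torch

def average_merge(state_dicts, weights=None):
    """Weighted average of matching parameters across expert checkpoints."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

if __name__ == "__main__":
    vision_expert = {"proj.weight": torch.ones(2, 2)}
    audio_expert = {"proj.weight": torch.zeros(2, 2)}
    merged = average_merge([vision_expert, audio_expert])
    print(merged["proj.weight"])  # all 0.5: midpoint of the two experts
```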

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Vid-LLM: A Compact Video-based 3D Multimodal LLM with Reconstruction-Reasoning Synergy

Researchers propose Vid-LLM, a new video-based 3D multimodal large language model that processes video inputs without requiring external 3D data for scene understanding. The model uses a Cross-Task Adapter module and Metric Depth Model to integrate geometric cues and maintain consistency across 3D tasks like question answering and visual grounding.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

SpiroLLM: Finetuning Pretrained LLMs to Understand Spirogram Time Series with Clinical Validation in COPD Reporting

Researchers developed SpiroLLM, the first multimodal large language model capable of understanding spirogram time series data for COPD diagnosis. Using data from 234,028 UK Biobank individuals, the model achieved 0.8977 diagnostic AUROC and maintained 100% valid response rate even with missing data, far outperforming text-only models.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

ProactiveMobile: A Comprehensive Benchmark for Boosting Proactive Intelligence on Mobile Devices

Researchers introduce ProactiveMobile, a new benchmark for developing AI agents that can proactively anticipate user needs on mobile devices rather than just responding to commands. The benchmark includes over 3,600 test instances across 14 scenarios, with current models achieving low success rates, indicating significant room for improvement in proactive AI capabilities.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

Modality Collapse as Mismatched Decoding: Information-Theoretic Limits of Multimodal LLMs

Researchers identified a fundamental limitation in multimodal LLMs where decoders trained on text cannot effectively utilize non-text information like speaker identity or visual textures, despite this information being preserved through all model layers. The study demonstrates this 'modality collapse' is due to decoder design rather than encoding failures, with experiments showing targeted training can improve specific modality accessibility.

AI · Bullish · arXiv – CS AI · 1d ago · 6/10
🧠

CLASP: Class-Adaptive Layer Fusion and Dual-Stage Pruning for Multimodal Large Language Models

Researchers introduce CLASP, a token reduction framework that optimizes Multimodal Large Language Models by intelligently pruning visual tokens through class-adaptive layer fusion and dual-stage pruning. The approach addresses computational inefficiency in MLLMs while maintaining performance across diverse benchmarks and architectures.
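
A hedged sketch of the basic visual-token-reduction idea, assuming PyTorch: rank visual tokens by a relevance score (here, similarity to a pooled text query) and keep only the top fraction before they reach the language model. CLASP's class-adaptive layer fusion and dual-stage schedule are not reproduced here.

```python
import torch

def prune_visual_tokens(visual_tokens, text_query, keep_ratio=0.25):
    """
    visual_tokens: (num_tokens, dim) visual features
    text_query:    (dim,) pooled text embedding used to score relevance
    Returns the kept tokens and their original indices.
    """
    scores = visual_tokens @ text_query                 # (num_tokens,)
    k = max(1, int(keep_ratio * visual_tokens.shape[0]))
    top = torch.topk(scores, k).indices.sort().values   # preserve order
    return visual_tokens[top], top

if __name__ == "__main__":
    tokens = torch.randn(576, 64)   # e.g. a 24x24 grid of patch tokens
    query = torch.randn(64)
    kept, idx = prune_visual_tokens(tokens, query)
    print(kept.shape)  # torch.Size([144, 64]) - 75% of tokens removed
```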

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Diffusion-CAM: Faithful Visual Explanations for dMLLMs

Researchers introduce Diffusion-CAM, a novel interpretability method designed specifically for diffusion-based Multimodal Large Language Models (dMLLMs). Unlike existing visualization techniques optimized for sequential models, this approach accounts for the parallel denoising process inherent to diffusion architectures, achieving superior localization accuracy and visual fidelity in model explanations.

AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠

MCERF: Advancing Multimodal LLM Evaluation of Engineering Documentation with Enhanced Retrieval

Researchers introduce MCERF, a multimodal retrieval framework that combines vision-language models with LLM reasoning to improve question-answering from engineering documents. The system achieves a 41.1% relative accuracy improvement over baseline RAG systems by handling complex multimodal content like tables, diagrams, and dense technical text through adaptive routing and hybrid retrieval strategies.
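
The sketch below illustrates one way hybrid retrieval can work: blend a lexical keyword-overlap score with a dense embedding-similarity score per chunk and rank by the mixture. The scorers and weighting are assumptions for illustration, not MCERF's actual pipeline.

```python
import math

def lexical_score(query: str, chunk: str) -> float:
    """Fraction of query terms that appear in the chunk."""
    q_terms = set(query.lower().split())
    c_terms = set(chunk.lower().split())
    return len(q_terms & c_terms) / max(len(q_terms), 1)

def dense_score(query_vec, chunk_vec) -> float:
    """Cosine similarity between precomputed embeddings."""
    dot = sum(q * c for q, c in zip(query_vec, chunk_vec))
    norm = math.sqrt(sum(q * q for q in query_vec)) * math.sqrt(sum(c * c for c in chunk_vec))
    return dot / norm if norm else 0.0

def hybrid_rank(query, chunks, embeddings, query_vec, alpha=0.5):
    """Blend lexical and dense scores and return chunks best-first."""
    scored = [
        (alpha * lexical_score(query, ch) + (1 - alpha) * dense_score(query_vec, emb), ch)
        for ch, emb in zip(chunks, embeddings)
    ]
    return [ch for _, ch in sorted(scored, key=lambda t: t[0], reverse=True)]

if __name__ == "__main__":
    chunks = ["Table 3 lists torque limits in Nm", "Safety notice for operators"]
    embs = [[0.9, 0.1], [0.2, 0.8]]
    print(hybrid_rank("torque limits table", chunks, embs, query_vec=[1.0, 0.0])[0])
```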

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Investigating Multimodal Large Language Models to Support Usability Evaluation

Researchers investigate how multimodal large language models (MLLMs) can assist with usability evaluation of user interfaces by analyzing text and visual context together. The study compares MLLM-generated assessments against expert evaluations, finding that these models can effectively prioritize usability issues by severity and offer complementary insights to traditional resource-intensive evaluation methods.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Q-Probe: Scaling Image Quality Assessment to High Resolution via Context-Aware Agentic Probing

Q-Probe introduces a novel agentic framework for scaling image quality assessment to high-resolution images by addressing limitations in existing reinforcement learning approaches. The research presents Vista-Bench, a new benchmark for fine-grained degradation analysis, and demonstrates state-of-the-art performance across multiple resolution scales through context-aware probing mechanisms.

AI · Bullish · arXiv – CS AI · Apr 6 · 6/10
🧠

ForgeryGPT: A Multimodal LLM for Interpretable Image Forgery Detection and Localization

Researchers have developed ForgeryGPT, a new multimodal AI framework that can detect, localize, and explain image forgeries through natural language interaction. The system combines advanced computer vision techniques with large language models to provide interpretable analysis of tampered images, addressing limitations in current forgery detection methods.

🧠 GPT-4
AI · Neutral · arXiv – CS AI · Mar 27 · 6/10
🧠

ReLope: KL-Regularized LoRA Probes for Multimodal LLM Routing

Researchers introduce ReLope, a new routing method for multimodal large language models that uses KL-regularized LoRA probes and attention mechanisms to improve cost-performance balance. The method addresses the challenge of degraded probe performance when visual inputs are added to text-only LLMs.
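
As a rough illustration of a KL-regularized probe objective, the sketch below (assuming PyTorch) trains a small router with a cross-entropy routing loss plus a KL penalty that keeps its output distribution close to a reference. This is a generic formulation for intuition, not ReLope's exact loss.

```python
import torch
import torch.nn.functional as F

def probe_loss(logits, targets, ref_probs, beta=0.1):
    """Cross-entropy routing loss plus a KL penalty toward ref_probs."""
    ce = F.cross_entropy(logits, targets)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), ref_probs, reduction="batchmean")
    return ce + beta * kl

if __name__ == "__main__":
    logits = torch.randn(8, 2, requires_grad=True)   # route to small vs large model
    targets = torch.randint(0, 2, (8,))
    ref = torch.full((8, 2), 0.5)                    # reference routing distribution
    loss = probe_loss(logits, targets, ref)
    loss.backward()
    print(float(loss))
```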

Page 1 of 2