34 articles tagged with #mllm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠 Researchers introduce V-Reflection, a new framework that transforms Multimodal Large Language Models (MLLMs) from passive observers to active interrogators through a 'think-then-look' mechanism. The approach addresses perception-related hallucinations in fine-grained tasks by allowing models to dynamically re-examine visual details during reasoning, showing significant improvements across six perception-intensive benchmarks.
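A minimal sketch of what such a 'think-then-look' loop could look like, assuming a numpy-style image array; `query_mllm` and `crop_region` are hypothetical stand-ins, not the paper's API:

```python
import numpy as np

# "Think-then-look" sketch: the model reasons first, and when it is
# unsure it requests a zoomed-in crop and re-examines the detail,
# instead of answering from the initial global view. Model calls are
# stubbed with canned output.

def query_mllm(image, prompt):
    # Stand-in for a real MLLM call; returns (answer, region_of_doubt).
    return "uncertain", (120, 80, 200, 160)  # dummy bbox (x1, y1, x2, y2)

def crop_region(image, bbox):
    x1, y1, x2, y2 = bbox
    return image[y1:y2, x1:x2]

def think_then_look(image, question, max_reflections=3):
    answer, region = query_mllm(image, f"Reason step by step: {question}")
    for _ in range(max_reflections):
        if answer != "uncertain" or region is None:
            break
        # Active interrogation: re-inspect the doubted region in detail.
        detail = crop_region(image, region)
        answer, region = query_mllm(detail, f"Re-examine this detail: {question}")
    return answer

print(think_then_look(np.zeros((480, 640)), "What digit is on the dial?"))
```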
AI · Neutral · arXiv – CS AI · Apr 6 · 7/10
🧠 Researchers propose the Hallucination-as-Cue Framework to analyze reinforcement learning's effectiveness in training multimodal AI models. The study reveals that RL training can improve reasoning performance even under hallucination-inducing conditions, challenging assumptions about how these models learn from visual information.
AI · Bearish · arXiv – CS AI · Mar 26 · 7/10
🧠 Research reveals that multimodal large language models (MLLMs) pose greater safety risks than diffusion models for image generation, producing more unsafe content and creating images that are harder for detection systems to identify. The stronger semantic understanding of MLLMs lets them interpret complex prompts that lead to dangerous outputs, including fake image synthesis.
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers developed AD-Copilot, a specialized multimodal AI assistant for industrial anomaly detection that outperforms existing models and even human experts. The system uses a novel visual comparison approach and achieved 82.3% accuracy on benchmarks, representing up to a 3.35x improvement over baselines.
🏢 Microsoft
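A hedged sketch of what a reference-vs-query comparison prompt might look like; `mllm_chat`, the prompt wording, and the JSON schema are illustrative assumptions, not AD-Copilot's actual interface:

```python
# Reference-vs-query anomaly detection sketch: hand the model a
# defect-free reference alongside the inspected part and ask for a
# structured verdict. The chat call is stubbed with canned output.

def mllm_chat(images, prompt):
    return {"anomaly": True, "region": "upper-left weld seam"}  # dummy

def detect_anomaly(reference_img, query_img):
    prompt = (
        "Image 1 is a defect-free reference part. Image 2 is the part "
        "under inspection. Compare them and report any anomaly and its "
        "location as JSON with keys 'anomaly' and 'region'."
    )
    return mllm_chat([reference_img, query_img], prompt)

print(detect_anomaly("ref.png", "query.png"))
```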
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers introduce PRIMO R1, a 7B parameter AI framework that transforms video MLLMs from passive observers into active critics for robotic manipulation tasks. The system uses reinforcement learning to achieve 50% better accuracy than specialized baselines and outperforms 72B-scale models, establishing state-of-the-art performance on the RoboFail benchmark.
🏢 OpenAI · 🧠 o1
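One common way to implement this kind of 'active critic' RL training is a verifiable reward that scores the policy's failure verdict against ground truth; the sketch below assumes that setup, with invented field names, rather than PRIMO R1's exact recipe:

```python
# Verifiable-reward sketch for training a video critic with RL: the
# policy's verdict is checked against labels, so no learned reward
# model is needed. Field names ("failed", "cause") are assumptions.

def critic_reward(prediction: dict, label: dict) -> float:
    reward = 0.0
    # Main signal: did the model correctly flag success vs. failure?
    if prediction.get("failed") == label["failed"]:
        reward += 1.0
        # Bonus: correct failure category (e.g. "gripper slip").
        if label["failed"] and prediction.get("cause") == label["cause"]:
            reward += 0.5
    return reward

print(critic_reward({"failed": True, "cause": "gripper slip"},
                    {"failed": True, "cause": "gripper slip"}))  # 1.5
```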
AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠 Researchers introduce OOD-MMSafe, a new benchmark revealing that current Multimodal Large Language Models fail to identify hidden safety risks up to 67.5% of the time. They also develop the CASPO framework, which reduces the failure rate of risk identification in consequence-driven safety scenarios to under 8%.
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers developed EvoPrune, a new method that prunes visual tokens during the encoding stage of Multimodal Large Language Models (MLLMs) rather than after encoding. The technique achieves a 2x inference speedup with less than 1% performance loss on video datasets, addressing efficiency bottlenecks in AI models processing high-resolution images and videos.
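A minimal sketch of in-encoder token pruning, assuming a ViT-style encoder where CLS-to-patch attention ranks token importance; the layer choice and keep ratio are assumptions, not EvoPrune's actual policy:

```python
import torch

# Prune visual tokens *inside* the vision encoder rather than after it:
# at a given layer, keep only the patch tokens the CLS token attends to
# most, so later layers and the LLM see far fewer tokens.

def prune_tokens(tokens: torch.Tensor, cls_attn: torch.Tensor, keep_ratio=0.5):
    """tokens: (N, D) patch tokens; cls_attn: (N,) CLS->patch attention."""
    k = max(1, int(tokens.shape[0] * keep_ratio))
    keep = torch.topk(cls_attn, k).indices.sort().values  # preserve spatial order
    return tokens[keep]

tokens = torch.randn(576, 1024)   # e.g. a 24x24 ViT patch grid
cls_attn = torch.rand(576)        # attention scores from the CLS token
print(prune_tokens(tokens, cls_attn).shape)  # torch.Size([288, 1024])
```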
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers introduce GeoSeg, a zero-shot, training-free framework for AI-driven segmentation of remote sensing imagery that uses multimodal language models for reasoning without requiring specialized training data. The system addresses domain-specific challenges in satellite and aerial image analysis through bias-aware coordinate refinement and dual-route prompting mechanisms.
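As one illustration of coordinate refinement: many MLLMs emit box coordinates normalized to a canonical resolution (often 0-1000), so a refinement step rescales and clamps the box before handing it to a promptable segmenter. Whether GeoSeg's bias-aware refinement works this way is an assumption; the 0-1000 convention is too:

```python
# Rescale an MLLM-predicted box from an assumed 0-1000 canonical frame
# to the true image size, then clamp to image bounds to absorb
# systematic localization bias in large remote-sensing scenes.

def refine_box(box, img_w, img_h, canon=1000):
    x1, y1, x2, y2 = box
    x1, x2 = sorted((x1 * img_w / canon, x2 * img_w / canon))
    y1, y2 = sorted((y1 * img_h / canon, y2 * img_h / canon))
    return (max(0, x1), max(0, y1), min(img_w, x2), min(img_h, y2))

print(refine_box((120, 80, 900, 1050), img_w=2048, img_h=2048))
```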
AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers introduced WebRRSBench, a comprehensive benchmark evaluating multimodal large language models' reasoning, robustness, and safety capabilities for web understanding tasks. Testing 11 MLLMs on 3,799 QA pairs from 729 websites revealed significant gaps in compositional reasoning, UI robustness, and safety-critical action recognition.
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers successfully developed multimodal large language models for Basque, a low-resource language, finding that a training mix with only 20% Basque data is enough for solid performance. The study demonstrates that specialized Basque language backbones aren't required, potentially enabling MLLM development for other underrepresented languages.
🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠 A large-scale benchmarking study finds that powerful Multimodal Large Language Models (MLLMs) can extract information from business documents using image-only input, potentially eliminating the need for traditional OCR preprocessing. The research demonstrates that well-designed prompts and instructions can further enhance MLLM performance in document processing tasks.
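A minimal sketch of image-only extraction with a structured prompt; the `mllm` stub, the prompt text, and the field list are illustrative, not the study's protocol:

```python
import json

# OCR-free document extraction sketch: a single multimodal prompt asks
# for a fixed JSON schema directly from the page image. The model call
# is stubbed with a canned response.

def mllm(image, prompt):
    return '{"invoice_no": "INV-0042", "total": "199.00", "currency": "EUR"}'

def extract_fields(image):
    prompt = (
        "You are given a scanned invoice. Return ONLY valid JSON with keys "
        "invoice_no, total, currency. Use null for fields you cannot read."
    )
    return json.loads(mllm(image, prompt))

print(extract_fields("invoice.png"))
```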
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers introduced MMR-Life, a comprehensive benchmark with 2,646 questions and 19,108 real-world images to evaluate multimodal reasoning capabilities of AI models. Even top models like GPT-5 achieved only 58% accuracy, highlighting significant challenges in real-world multimodal reasoning across seven different reasoning types.
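For context, a benchmark like this typically reduces to exact-match accuracy per reasoning type; a generic harness sketch, with an assumed dataset layout:

```python
from collections import defaultdict

# Generic benchmark scoring sketch: exact-match accuracy over
# (image, question, answer) triples, broken down by reasoning type.

def evaluate(model, dataset):
    correct, total = defaultdict(int), defaultdict(int)
    for ex in dataset:
        pred = model(ex["image"], ex["question"])
        total[ex["reasoning_type"]] += 1
        correct[ex["reasoning_type"]] += int(pred.strip() == ex["answer"])
    return {t: correct[t] / total[t] for t in total}

dataset = [{"image": None, "question": "2+2?", "answer": "4",
            "reasoning_type": "arithmetic"}]
print(evaluate(lambda img, q: "4", dataset))  # {'arithmetic': 1.0}
```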
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers introduce Interaction2Code, the first benchmark for evaluating Multimodal Large Language Models' ability to generate interactive webpage code from prototypes. The study identifies four critical limitations in current MLLMs and proposes enhancement strategies to improve their performance on dynamic web interactions.
AI · Bullish · arXiv – CS AI · Apr 6 · 6/10
🧠 Researchers developed QAPruner, a new framework that simultaneously optimizes vision token pruning and post-training quantization for Multimodal Large Language Models (MLLMs). The method addresses the problem that traditional token pruning can discard important activation outliers needed for quantization stability, achieving a 2.24% accuracy improvement over baselines while retaining only 12.5% of visual tokens.
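A hedged sketch of the core idea, outlier-aware token selection: score tokens both by attention importance and by activation-outlier magnitude, so pruning keeps the outliers that quantization calibration depends on. The combination rule is an assumption, not QAPruner's exact method:

```python
import torch

# Select tokens by a blended score: attention importance plus per-token
# peak activation, so tokens carrying quantization-critical outliers
# survive pruning even when their attention score is low.

def select_tokens(tokens, attn_score, keep_ratio=0.125):
    outlier_score = tokens.abs().max(dim=-1).values  # per-token peak activation
    score = attn_score / attn_score.sum() + outlier_score / outlier_score.sum()
    k = max(1, int(tokens.shape[0] * keep_ratio))
    keep = torch.topk(score, k).indices.sort().values
    return tokens[keep]

tokens = torch.randn(576, 4096)
attn = torch.rand(576)
print(select_tokens(tokens, attn).shape)  # torch.Size([72, 4096]) -> 12.5%
```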
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers introduce GeoSketch, a neural-symbolic AI framework that solves geometric problems through dynamic visual manipulation, including drawing auxiliary lines and applying transformations. The system combines perception, symbolic reasoning, and interactive sketch actions, achieving superior performance on geometric problem-solving benchmarks compared to static image processing methods.
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce VTC-Bench, a comprehensive benchmark for evaluating multimodal AI models' ability to use visual tools for complex tasks. The benchmark reveals significant limitations in current models, with leading model Gemini-3.0-Pro achieving only 51% accuracy on multi-tool visual reasoning tasks.
🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose ES-Merging, a new framework for combining specialized biological multimodal large language models (MLLMs) by using embedding space signals rather than traditional parameter-based methods. The approach estimates merging coefficients at both layer-wise and element-wise granularities, outperforming existing merging techniques and even task-specific fine-tuned models on cross-modal scientific problems.
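A minimal sketch of layer-wise merging with per-layer coefficients already in hand; how ES-Merging derives them from embedding-space signals is not detailed here, so `alphas` is simply supplied by the caller:

```python
import torch

# Layer-wise model merging sketch: each parameter tensor gets its own
# coefficient a, giving merged = a * theta_A + (1 - a) * theta_B.

def merge(state_a: dict, state_b: dict, alphas: dict) -> dict:
    merged = {}
    for name, wa in state_a.items():
        a = alphas.get(name, 0.5)  # default: plain average
        merged[name] = a * wa + (1 - a) * state_b[name]
    return merged

sa = {"layer0.weight": torch.ones(2, 2)}
sb = {"layer0.weight": torch.zeros(2, 2)}
print(merge(sa, sb, {"layer0.weight": 0.75})["layer0.weight"])  # all 0.75
```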
AI · Bullish · arXiv – CS AI · Mar 16 · 6/10
🧠 Researchers developed UNIFIER, a continual learning framework for multimodal large language models (MLLMs) to adapt to changing visual scenarios without catastrophic forgetting. The framework addresses visual discrepancies across different environments like high-altitude, underwater, low-altitude, and indoor scenarios, showing significant improvements over existing methods.
🏢 Hugging Face
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10
🧠 Researchers introduce EgoCross, a new benchmark to evaluate multimodal AI models on egocentric video understanding across diverse domains like surgery, extreme sports, and industrial settings. The study reveals that current AI models, including specialized egocentric models, struggle with cross-domain generalization beyond common daily activities.
AI · Neutral · arXiv – CS AI · Mar 4 · 5/10
🧠 Researchers introduce HSSBench, a new benchmark designed to evaluate multimodal large language models (MLLMs) on Humanities and Social Sciences tasks across multiple languages. The benchmark contains over 13,000 samples and reveals significant challenges for current state-of-the-art models in cross-disciplinary reasoning.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce LifeEval, a new multimodal benchmark designed to evaluate how well AI assistants can help humans in real-time daily life tasks from a first-person perspective. The benchmark reveals significant challenges for current AI models in providing timely and adaptive assistance in dynamic environments.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce MC-Search, the first benchmark for evaluating agentic multimodal retrieval-augmented generation (MM-RAG) systems with long, structured reasoning chains. The benchmark reveals systematic issues in current multimodal large language models and introduces Search-Align, a training framework that improves planning and retrieval accuracy.
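A skeleton of the agentic loop such MM-RAG systems implement (plan, retrieve, reason, repeat); all components below are stubs, and the stopping rule is an assumption:

```python
# Agentic MM-RAG loop sketch: the planner proposes a retrieval query,
# retrieved evidence accumulates in context, and the loop ends when the
# planner declares no further retrieval is needed.

def plan(question, context):
    return None if context else "search: " + question  # stop after one hop

def retrieve(query):
    return ["retrieved passage / image caption for: " + query]

def answer(question, context):
    return f"answer derived from {len(context)} retrieved items"

def mm_rag(question, max_steps=5):
    context = []
    for _ in range(max_steps):
        query = plan(question, context)
        if query is None:
            break
        context += retrieve(query)
    return answer(question, context)

print(mm_rag("Which landmark appears in the photo?"))
```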
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 FlowPortrait is a new reinforcement learning framework that uses Multimodal Large Language Models as evaluators to generate more realistic talking-head videos with better lip synchronization. The system combines human-aligned assessment with policy optimization techniques to address persistent issues in audio-driven portrait animation.
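A sketch of MLLM-as-judge reward shaping in a group-normalized (GRPO-style) update, with the judge call stubbed; the scoring prompt and normalization are assumptions, not FlowPortrait's exact objective:

```python
import statistics

# MLLM-as-judge reward sketch: the judge scores lip-sync quality of a
# group of sampled clips, and group-normalized scores become the
# advantages fed to the policy update.

def mllm_judge(video_clip) -> float:
    # Stand-in for "rate the lip sync of this clip from 0 to 10".
    return 7.0

def group_advantages(clips):
    scores = [mllm_judge(c) for c in clips]
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores) or 1.0  # avoid divide-by-zero
    return [(s - mu) / sd for s in scores]

print(group_advantages(["clip_a", "clip_b", "clip_c"]))  # [0.0, 0.0, 0.0]
```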
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers developed EmbedLens, a tool to analyze how multimodal large language models process visual information, finding that only 60% of visual tokens carry meaningful image-specific information. The study reveals significant inefficiencies in current MLLM architectures and proposes optimizations through selective token pruning and mid-layer injection.
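One simple way to probe for image-specific information, which may well differ from EmbedLens's actual measure: compare each token position's variance across many images, and treat near-constant positions as pruning candidates:

```python
import torch

# Variance probe sketch: stack the visual token embeddings of many
# different images; positions whose embeddings barely change across
# images carry little image-specific information.

def image_specificity(token_stack: torch.Tensor) -> torch.Tensor:
    """token_stack: (num_images, num_tokens, dim) -> (num_tokens,) scores."""
    return token_stack.var(dim=0).mean(dim=-1)

stack = torch.randn(32, 576, 1024)   # 32 images, 576 visual tokens each
scores = image_specificity(stack)
uninformative = (scores < scores.quantile(0.4)).sum()
print(f"{uninformative.item()} of 576 tokens look image-agnostic")
```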
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers have developed MIDAS, a new jailbreaking framework that successfully bypasses safety mechanisms in Multimodal Large Language Models by dispersing harmful content across multiple images. The technique achieved an 81.46% average attack success rate against four closed-source MLLMs by extending reasoning chains and reducing security attention.
$LINK