253 articles tagged with #multimodal-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv — CS AI · Mar 9 · 6/10
🧠 A comprehensive survey examines how large multimodal language models are transforming scientific research across five key areas: literature search, idea generation, content creation, multimodal artifact production, and peer review evaluation. The research highlights both the potential for AI-assisted scientific discovery and the ethical concerns regarding research integrity and misuse of generative models.
AI · Bullish · arXiv — CS AI · Mar 9 · 6/10
🧠 Researchers developed MAP (Map-Level Attention Processing), a training-free method to reduce hallucinations in Large Vision-Language Models by treating hidden states as 2D semantic maps. The approach uses attention-based operations to better leverage factual information and improve consistency between generated text and visual inputs.
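The map-level idea — reinterpreting patch-token hidden states as a 2D grid and refining each cell with attention over the whole map — can be sketched as follows. This is a minimal illustration under my own assumptions; the function name and the exact refinement operation are not the paper's.

```python
import math

def attend_over_map(fmap):
    """Refine each cell of a 2D semantic map by softmax attention over all
    cells (illustrative only; MAP's exact map-level ops are not shown here)."""
    cells = [c for row in fmap for c in row]          # flatten the H x W grid
    d = len(cells[0])
    refined = []
    for q in cells:
        # scaled dot-product scores of this cell against every cell in the map
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in cells]
        m = max(scores)                               # subtract max for stability
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        refined.append([sum(wi * k[j] for wi, k in zip(w, cells)) / z
                        for j in range(d)])
    h, wdt = len(fmap), len(fmap[0])
    return [refined[i * wdt:(i + 1) * wdt] for i in range(h)]

# a toy 2x2 "map" of 3-dim patch hidden states
fmap = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
        [[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]]
out = attend_over_map(fmap)
print(len(out), len(out[0]), len(out[0][0]))   # 2 2 3
```

Because each refined cell is a convex combination of the original cells, the smoothing pulls every position toward the map's dominant (presumably factual) features.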
AI · Bullish · arXiv — CS AI · Mar 9 · 6/10
🧠 Researchers introduce 3DThinker, a new framework that enables vision-language models to perform 3D spatial reasoning from limited 2D views without requiring 3D training data. The system uses a two-stage training approach to align 3D representations with foundation models and demonstrates superior performance across multiple benchmarks.
AI · Bullish · arXiv — CS AI · Mar 9 · 6/10
🧠 Researchers present CASA, a new approach using cross-attention over self-attention for vision-language models that maintains competitive performance while significantly reducing memory and compute costs. The method shows particular advantages for real-time applications like video captioning by avoiding expensive token insertion into language model streams.
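A back-of-the-envelope cost model shows why keeping visual tokens out of the language model's stream helps. The token counts below are illustrative assumptions, not figures from the paper.

```python
def self_attn_cost(text_len, vis_tokens):
    # visual tokens spliced into the LM stream: attention is quadratic
    # in the combined sequence length
    return (text_len + vis_tokens) ** 2

def cross_attn_cost(text_len, vis_tokens):
    # text-only self-attention, plus text queries attending to visual
    # keys/values kept outside the stream
    return text_len ** 2 + text_len * vis_tokens

T, V = 128, 576   # e.g. 576 patch tokens for a 24x24 ViT grid (assumed sizes)
print(self_attn_cost(T, V), cross_attn_cost(T, V))
```

For video, where V grows with every frame, the gap between the quadratic and the linear-in-V term widens further, which matches the summary's point about real-time captioning.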
AI · Bullish · arXiv — CS AI · Mar 9 · 6/10
🧠 Researchers introduce CARE (Contrastive Anchored REflection), a new AI training framework that improves multimodal reasoning by learning from failures rather than just successes. The method achieved a 4.6-point accuracy improvement on visual-reasoning benchmarks and reached state-of-the-art results on MathVista and MMMU-Pro when tested on Qwen models.
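One way to "learn from failures" contrastively is a Bradley-Terry-style loss that anchors the successful reasoning trace against each failed one. This loss form is my stand-in for illustration, not CARE's actual objective.

```python
import math

def failure_anchored_loss(score_success, scores_failure):
    """Hypothetical contrastive objective: prefer the successful trace over
    each failed trace, averaging a logistic loss over the failure anchors."""
    loss = 0.0
    for s_neg in scores_failure:
        margin = score_success - s_neg
        loss += -math.log(1.0 / (1.0 + math.exp(-margin)))
    return loss / len(scores_failure)

# model scores for one question: one correct trace, two failed ones (made up)
print(failure_anchored_loss(2.0, [0.5, -1.0]))
```

The loss shrinks as the successful trace's score pulls away from every failure, so gradients come from the failures the model has not yet separated.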
AI · Bullish · arXiv — CS AI · Mar 6 · 5/10
🧠 Researchers propose K-Gen, a new multimodal AI framework that uses Large Language Models to generate realistic driving trajectories for autonomous vehicle simulation. The system combines visual map data with text descriptions to create interpretable keypoints that guide trajectory generation, outperforming existing baselines on major datasets.
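The keypoint-to-trajectory step can be illustrated with plain linear interpolation; K-Gen uses a learned generator, so this is only a stand-in, and the keypoints below are hypothetical.

```python
def trajectory_from_keypoints(keypoints, steps_per_segment=4):
    """Densify sparse (x, y) keypoints into a trajectory by linear
    interpolation between consecutive keypoints."""
    traj = []
    for (x0, y0), (x1, y1) in zip(keypoints, keypoints[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            traj.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    traj.append(keypoints[-1])   # include the final keypoint exactly
    return traj

# keypoints an LLM might emit for a lane change: straight, drift left, straight
kps = [(0.0, 0.0), (10.0, 0.0), (20.0, 3.5), (30.0, 3.5)]
path = trajectory_from_keypoints(kps)
print(len(path), path[0], path[-1])
```

The appeal of keypoints as an intermediate representation is that each one is human-readable ("begin lane change at x=10"), while the densification step stays mechanical.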
AI · Bullish · arXiv — CS AI · Mar 6 · 6/10
🧠 Researchers propose 'Imagine,' a new zero-shot commonsense reasoning framework that enhances Pre-trained Language Models by integrating machine-generated visual signals into the reasoning pipeline. The approach demonstrates superior performance over existing zero-shot methods and even advanced large language models by addressing human reporting biases through machine imagination.
AI · Neutral · arXiv — CS AI · Mar 5 · 5/10
🧠 Researchers developed M-QUEST, a new benchmark for evaluating AI models' ability to understand and detect toxicity in internet memes. The framework identifies 10 key dimensions for meme interpretation and tests 8 open-source language models, finding that instruction-tuned models perform better but still struggle with pragmatic inference.
AI · Bullish · arXiv — CS AI · Mar 4 · 5/10 · 4
🧠 Researchers have developed VL-KGE, a new framework that combines Vision-Language Models with Knowledge Graph Embeddings to better process multimodal knowledge graphs. The approach addresses limitations in existing methods by enabling stronger cross-modal alignment and more unified representations across diverse data types.
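To make the combination concrete, here is how fused vision-language entity embeddings might plug into a knowledge-graph scorer. Both the naive convex-combination fusion and the TransE-style distance are my illustrative choices, not VL-KGE's actual components.

```python
def fuse(text_vec, vis_vec, alpha=0.5):
    # naive cross-modal fusion: convex combination of text and visual embeddings
    return [alpha * t + (1 - alpha) * v for t, v in zip(text_vec, vis_vec)]

def transe_score(head, rel, tail):
    # TransE plausibility: smaller ||h + r - t|| means a more plausible triple
    return sum((h + r - t) ** 2 for h, r, t in zip(head, rel, tail)) ** 0.5

# made-up 2-dim embeddings for a (head, relation, tail) triple
h = fuse([1.0, 0.0], [0.8, 0.2])
t = fuse([1.0, 1.0], [0.8, 1.2])
r = [0.0, 1.0]                       # relation as a translation vector
score = transe_score(h, r, t)
print(score)
```

The point of a unified representation is exactly this: once entities from any modality land in one vector space, a single relational scorer can rank triples over all of them.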
AI · Neutral · arXiv — CS AI · Mar 4 · 5/10 · 3
🧠 Researchers developed V-GEMS, a new multimodal AI agent architecture that improves web navigation by combining visual grounding with explicit memory systems. The system achieved a 28.7% performance improvement over existing baselines by preventing navigation loops and enabling better backtracking through structured path mapping.
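Loop prevention and backtracking can be sketched with a visited set plus an explicit path stack. The toy site graph is hypothetical, and the real agent works over rendered pages, not an adjacency dict.

```python
def navigate(graph, start, goal):
    """Web-navigation sketch: a visited set prevents revisiting pages (loops),
    and an explicit path stack supports backtracking at dead ends."""
    visited, stack = {start}, [start]
    while stack:
        page = stack[-1]
        if page == goal:
            return stack[:]                  # successful path, in order
        unseen = [p for p in graph.get(page, []) if p not in visited]
        if unseen:
            visited.add(unseen[0])
            stack.append(unseen[0])          # explore the next unseen link
        else:
            stack.pop()                      # dead end: backtrack one step
    return None                              # goal unreachable

site = {"home": ["search", "cart"], "search": ["results"],
        "results": ["home"], "cart": ["checkout"]}
print(navigate(site, "home", "checkout"))    # ['home', 'cart', 'checkout']
```

Note how the `results -> home` edge would trap a memoryless agent in a cycle; the visited set breaks it, and the stack lets the agent retreat from the `search` branch to try `cart`.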
AI · Bullish · TechCrunch — AI · Mar 3 · 6/10 · 4
🧠 Anthropic has launched Voice Mode for Claude Code, enhancing its AI coding platform with voice interaction capabilities. This development represents the company's strategic move to compete more effectively in the increasingly competitive AI coding assistant market.
AI · Neutral · arXiv — CS AI · Mar 3 · 6/10 · 11
🧠 Researchers introduce LifeEval, a new multimodal benchmark designed to evaluate how well AI assistants can help humans in real-time daily life tasks from a first-person perspective. The benchmark reveals significant challenges for current AI models in providing timely and adaptive assistance in dynamic environments.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 3
🧠 Researchers introduced InterSyn, a 1.8M-sample dataset designed to improve Large Multimodal Models' ability to generate interleaved image-text content. The dataset includes a new evaluation framework called SynJudge that measures four key performance metrics, with experiments showing significant improvements even with smaller 25K-50K-sample subsets.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 8
🧠 Researchers introduce M-JudgeBench, a comprehensive benchmark for evaluating Multimodal Large Language Models (MLLMs) used as judges, and propose the Judge-MCTS framework to improve judge model training. The work addresses systematic weaknesses in existing MLLM judge systems through capability-oriented evaluation and enhanced data generation methods.
AI · Neutral · arXiv — CS AI · Mar 3 · 6/10 · 8
🧠 Researchers introduce IRIS Benchmark, the first comprehensive evaluation framework for measuring fairness in Unified Multimodal Large Language Models (UMLLMs) across both understanding and generation tasks. The benchmark integrates 60 granular metrics across three dimensions and reveals systemic bias issues in leading AI models, including 'generation gaps' and 'personality splits'.
AI · Neutral · arXiv — CS AI · Mar 3 · 6/10 · 7
🧠 Researchers introduce MC-Search, the first benchmark for evaluating agentic multimodal retrieval-augmented generation (MM-RAG) systems with long, structured reasoning chains. The benchmark reveals systematic issues in current multimodal large language models and introduces Search-Align, a training framework that improves planning and retrieval accuracy.
AI · Bullish · arXiv — CS AI · Mar 3 · 7/10 · 6
🧠 Researchers have released MMCOMET, the first large-scale multimodal commonsense knowledge graph that combines visual and textual information with over 900K multimodal triples. The system extends existing knowledge graphs to support complex AI reasoning tasks like image captioning and visual storytelling, demonstrating improved contextual understanding compared to text-only approaches.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 8
🧠 Researchers have developed FCN-LLM, a framework that enables Large Language Models to understand brain functional connectivity networks from fMRI scans through multi-task instruction tuning. The system uses a multi-scale encoder to capture brain features and demonstrates strong zero-shot generalization across unseen datasets, outperforming conventional supervised models.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 6
🧠 Researchers introduce TripleSumm, a novel AI architecture that adaptively fuses visual, text, and audio modalities for improved video summarization. The team also releases MoSu, the first large-scale benchmark dataset providing all three modalities for multimodal video summarization research.
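Adaptive three-modality fusion can be sketched as a softmax gate over per-modality features. The gate here is a fixed logit vector for illustration; in a real architecture it would be predicted per segment.

```python
import math

def adaptive_fusion(vis, txt, aud, gate_logits):
    """Gate-weighted fusion of visual, text, and audio feature vectors; the
    softmax gate lets the model lean on whichever modality is informative."""
    exps = [math.exp(g) for g in gate_logits]
    z = sum(exps)
    w = [e / z for e in exps]                # one weight per modality, sums to 1
    return [w[0] * v + w[1] * t + w[2] * a for v, t, a in zip(vis, txt, aud)]

# made-up 2-dim features; the gate strongly favors the visual stream
fused = adaptive_fusion([1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [2.0, 0.0, 0.0])
print([round(x, 3) for x in fused])
```

Because the weights are a convex combination, the fused vector stays in the span of the modality features, and downgrading a noisy modality is just a matter of lowering its gate logit.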
AI · Neutral · arXiv — CS AI · Mar 3 · 7/10 · 6
🧠 Researchers introduce ProtRLSearch, a multi-round protein search agent that uses reinforcement learning and multimodal inputs (protein sequences and text) to improve protein analysis for healthcare applications. The system addresses limitations of single-round, text-only protein search agents and includes a new benchmark called ProtMCQs with 3,000 multiple-choice questions for evaluation.
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 4
🧠 Researchers introduce LLaVE, a new multimodal embedding model that uses hardness-weighted contrastive learning to better distinguish between positive and negative pairs in image-text tasks. The model achieves state-of-the-art performance on the MMEB benchmark, with LLaVE-2B outperforming previous 7B models and demonstrating strong zero-shot transfer capabilities to video retrieval tasks.
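The hardness-weighting idea — up-weighting negatives whose similarity to the query is high — can be shown in a toy InfoNCE variant. The exact weighting scheme in LLaVE may differ; this is a sketch of the general technique.

```python
import math

def hardness_weighted_infonce(pos_sim, neg_sims, beta=2.0):
    """Contrastive loss where harder (higher-similarity) negatives are
    up-weighted, so easy negatives contribute little gradient."""
    weights = [math.exp(beta * s) for s in neg_sims]
    z = sum(weights)
    weights = [len(neg_sims) * w / z for w in weights]   # mean weight = 1
    denom = math.exp(pos_sim) + sum(w * math.exp(s)
                                    for w, s in zip(weights, neg_sims))
    return -math.log(math.exp(pos_sim) / denom)

# same positive similarity, but one batch has easy negatives, one has hard ones
easy = hardness_weighted_infonce(0.9, [0.1, 0.0, -0.2])
hard = hardness_weighted_infonce(0.9, [0.8, 0.7, 0.6])
print(easy, hard)
```

Hard negatives dominate the denominator, so the loss (and its gradient) concentrates on exactly the pairs the embedding model still confuses.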
AI · Bullish · arXiv — CS AI · Mar 3 · 7/10 · 7
🧠 Researchers introduce MERA (Multimodal Mixture-of-Experts with Retrieval Augmentation), a new AI framework for protein active site identification that addresses challenges in drug discovery. The system achieves 90% AUPRC performance on active site prediction through hierarchical multi-expert retrieval and reliability-aware fusion strategies.
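Reliability-aware fusion can be sketched as a reliability-weighted average of per-expert predictions. The probabilities and weights below are made up, and the real system's fusion is learned rather than a fixed average.

```python
def reliability_fusion(expert_probs, reliabilities):
    """Fuse per-residue active-site probabilities from several experts,
    weighting each expert by a reliability score."""
    z = sum(reliabilities)
    n = len(expert_probs[0])
    return [sum(r * p[i] for r, p in zip(reliabilities, expert_probs)) / z
            for i in range(n)]

# three experts scoring four residues; the middle expert is least reliable
probs = [[0.9, 0.1, 0.8, 0.2],
         [0.2, 0.9, 0.3, 0.9],    # unreliable expert disagrees everywhere
         [0.8, 0.2, 0.9, 0.1]]
fused = reliability_fusion(probs, [0.45, 0.10, 0.45])
print([round(p, 2) for p in fused])
```

Down-weighting the unreliable expert keeps its contradictory votes from flipping any residue's prediction, which is the point of reliability-aware fusion.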
AI · Neutral · arXiv — CS AI · Mar 3 · 6/10 · 7
🧠 Researchers developed an event-based evaluation framework for LLM-generated clinical summaries of remote monitoring data, revealing that models with high semantic similarity often fail to capture clinically significant events. A vision-based approach using time-series visualizations achieved the best clinical event alignment with 45.7% abnormality recall.
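Once events are extracted from reference data and from a generated summary, event-level recall reduces to set overlap. The event names below are invented for illustration; the paper's extraction step is the hard part and is not shown.

```python
def event_recall(reference_events, predicted_events):
    """Event-level recall: fraction of clinically significant reference
    events that also appear in the generated summary's event set."""
    ref, pred = set(reference_events), set(predicted_events)
    return len(ref & pred) / len(ref) if ref else 1.0

# made-up event identifiers from a week of remote monitoring
ref = ["bradycardia_day2", "spo2_drop_day3", "weight_gain_day5"]
gen = ["spo2_drop_day3", "weight_gain_day5", "tachycardia_day1"]
r = event_recall(ref, gen)
print(r)
```

A summary can paraphrase the record fluently (high semantic similarity) while this metric stays low, which is exactly the failure mode the framework exposes.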
AI · Bullish · arXiv — CS AI · Mar 3 · 6/10 · 4
🧠 Researchers have developed AI models that can decode readers' information-seeking goals solely from their eye movements while reading text. The study introduces new evaluation frameworks using large-scale eye tracking data and demonstrates success in both selecting correct goals from options and reconstructing precise goal formulations.
AI · Neutral · arXiv — CS AI · Mar 3 · 6/10 · 10
🧠 Researchers introduce ATM-Bench, the first benchmark for evaluating AI assistants' ability to recall and reason over long-term personalized memory across multiple modalities. The benchmark reveals poor performance (under 20% accuracy) for current state-of-the-art memory systems, highlighting significant limitations in personalized AI capabilities.