y0news

#multimodal News & Analysis

80 articles tagged with #multimodal. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

V-Reflection: Transforming MLLMs from Passive Observers to Active Interrogators

Researchers introduce V-Reflection, a new framework that transforms Multimodal Large Language Models (MLLMs) from passive observers to active interrogators through a 'think-then-look' mechanism. The approach addresses perception-related hallucinations in fine-grained tasks by allowing models to dynamically re-examine visual details during reasoning, showing significant improvements across six perception-intensive benchmarks.
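
The core loop is easy to picture in code. Below is a minimal sketch of a 'think-then-look' style inference loop, not the paper's actual implementation: the model may ask to re-inspect a region before answering, and the helper names (`mllm_generate`, `crop_region`) and the `LOOK:` protocol are purely illustrative stand-ins.

```python
def mllm_generate(image, prompt: str) -> str:
    """Stub for an MLLM call; replace with a real model (e.g. LLaVA, Qwen-VL)."""
    return "LOOK: bottom-left corner" if "re-examine" in prompt else "final answer"

def crop_region(image, region_hint: str):
    """Stub: crop/zoom the image around the region the model asked about."""
    return image

def think_then_look(image, question: str, max_reflections: int = 3) -> str:
    """Sketch of a think-then-look loop: the model may request another look at
    a visual detail before committing to an answer (assumed protocol)."""
    context = question
    for _ in range(max_reflections):
        out = mllm_generate(image, f"{context}\nIf unsure, re-examine and reply 'LOOK: <region>'.")
        if not out.startswith("LOOK:"):
            return out                      # model is confident, stop reflecting
        region = out.removeprefix("LOOK:").strip()
        image = crop_region(image, region)  # zoom into the requested detail
        context += f"\n[Re-examined region: {region}]"
    return mllm_generate(image, context + "\nGive your final answer.")

print(think_then_look(image=None, question="What digit is on the jersey?"))
```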

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10
🧠

DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset

Researchers have released DanQing, a large-scale Chinese vision-language dataset containing 100 million high-quality image-text pairs curated from Common Crawl data. The dataset addresses the data bottleneck in Chinese vision-language pre-training, and models pre-trained on it outperform those trained on existing Chinese datasets across various downstream tasks.
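
For context, a common way to curate pairs like these from Common Crawl is to keep only images whose captions score highly under a Chinese CLIP model. The sketch below shows that generic filtering step and is an assumption, not DanQing's published pipeline; the checkpoint and threshold are illustrative.

```python
import torch
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

def keep_pair(image: Image.Image, caption: str, threshold: float = 0.25) -> bool:
    """Keep an image-text pair only if the image and caption embeddings agree."""
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    similarity = (img @ txt.T).item()   # cosine similarity in the shared space
    return similarity >= threshold
```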

AI · Bullish · OpenAI News · Mar 17 · 7/10
🧠

Introducing GPT-5.4 mini and nano

OpenAI has introduced GPT-5.4 mini and nano, which are smaller and faster versions of GPT-5.4 designed for specific use cases. These models are optimized for coding, tool usage, multimodal reasoning, and handling high-volume API requests and sub-agent workloads.

🧠 GPT-5
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought

Researchers have developed rationale-enhanced decoding (RED), a new inference-time strategy that improves chain-of-thought reasoning in large vision-language models. The method addresses the problem where LVLMs ignore generated rationales by harmonizing visual and rationale information during decoding, showing consistent improvements across multiple benchmarks.
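
As a rough illustration of how a rationale can be "harmonized" into decoding, the sketch below contrasts the next-token distribution with and without the rationale in context and boosts tokens the rationale supports; the specific contrastive form and the `alpha` weight are assumptions, not RED's exact objective.

```python
import torch
import torch.nn.functional as F

def red_step(logits_with_rationale: torch.Tensor,
             logits_without_rationale: torch.Tensor,
             alpha: float = 1.0) -> torch.Tensor:
    """One decoding step that up-weights tokens supported by the rationale."""
    log_p_r = F.log_softmax(logits_with_rationale, dim=-1)
    log_p   = F.log_softmax(logits_without_rationale, dim=-1)
    # Boost tokens whose probability rises when the rationale is in context.
    scores = log_p_r + alpha * (log_p_r - log_p)
    return torch.distributions.Categorical(logits=scores).sample()

vocab = 32000
token = red_step(torch.randn(vocab), torch.randn(vocab))
print(int(token))
```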

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Membership Inference for Contrastive Pre-training Models with Text-only PII Queries

Researchers developed UMID, a new text-only auditing framework to detect whether personally identifiable information was memorized during training of multimodal contrastive models like CLIP and CLAP. The method improves the efficiency and effectiveness of membership inference while requiring only textual PII queries rather than the original images or audio.
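
The text-only query pattern can be sketched with an off-the-shelf CLIP text encoder: score a candidate PII string against near-miss decoys and flag anomalous separation. The scoring rule below is a hypothetical stand-in, not UMID's actual statistic.

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def text_embed(text: str) -> torch.Tensor:
    inputs = tokenizer([text], return_tensors="pt", padding=True)
    emb = model.get_text_features(**inputs)
    return emb / emb.norm(dim=-1, keepdim=True)

def membership_score(pii: str, decoys: list[str]) -> float:
    """Higher when the true PII string sits far from plausible decoys in the
    text embedding space, a crude proxy for memorization (illustrative only)."""
    target = text_embed(f"a photo of {pii}")
    decoy_sims = [float(target @ text_embed(f"a photo of {d}").T) for d in decoys]
    return 1.0 - max(decoy_sims)   # hypothetical score; calibrate on a holdout set

print(membership_score("Jane Q. Example", ["Jane Q. Sample", "June Q. Example"]))
```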

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠

OOD-MMSafe: Advancing MLLM Safety from Harmful Intent to Hidden Consequences

Researchers introduce OOD-MMSafe, a new benchmark revealing that current Multimodal Large Language Models fail to identify hidden safety risks up to 67.5% of the time. They also developed the CASPO framework, which reduces the failure rate for risk identification to under 8% in consequence-driven safety scenarios.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

BiCLIP: Domain Canonicalization via Structured Geometric Transformation

Researchers introduce BiCLIP, a new framework that improves vision-language models' ability to adapt to specialized domains through geometric transformations. The approach achieves state-of-the-art results across 11 benchmarks while maintaining simplicity and low computational requirements.
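
Conceptually, the idea is to insert a small learned geometric transform between frozen CLIP features and the zero-shot classifier. The sketch below shows that pattern in its simplest form, a learned linear map plus shift renormalized to the unit sphere; BiCLIP's actual structured transformation is richer than this.

```python
import torch
import torch.nn as nn

class Canonicalizer(nn.Module):
    """Toy canonicalization layer applied to frozen CLIP image features."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.rotation = nn.Parameter(torch.eye(dim))   # near-identity linear map
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        out = feats @ self.rotation + self.shift
        return out / out.norm(dim=-1, keepdim=True)    # back onto the unit sphere

canon = Canonicalizer()
image_feats = torch.nn.functional.normalize(torch.randn(8, 512), dim=-1)
text_feats  = torch.nn.functional.normalize(torch.randn(10, 512), dim=-1)
logits = 100.0 * canon(image_feats) @ text_feats.T     # zero-shot class logits
print(logits.shape)  # (8 images, 10 candidate classes)
```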

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10
🧠

MMGraphRAG: Bridging Vision and Language with Interpretable Multimodal Knowledge Graphs

Researchers introduce MMGraphRAG, a new AI framework that addresses hallucination issues in large language models by integrating visual scene graphs with text knowledge graphs through cross-modal fusion. The system uses SpecLink for entity linking and demonstrates superior performance in multimodal information processing across multiple benchmarks.
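
A toy version of the fusion step looks like this: link entities between the visual scene graph and the text knowledge graph, then merge triples into one cross-modal graph. Plain string similarity stands in for the paper's SpecLink module here, and the example triples are made up.

```python
from difflib import SequenceMatcher

scene_graph = [("man", "holding", "guitar"), ("guitar", "color", "red")]
text_kg     = [("Guitar", "instrument_of", "rock music"), ("Man", "synonym", "person")]

def linked(a: str, b: str, threshold: float = 0.8) -> bool:
    """Crude entity linker; the real system uses a dedicated linking module."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def fuse(visual_triples, text_triples):
    """Merge textual triples into the visual scene graph whenever one of their
    entities links to a visually grounded entity."""
    fused = [(h, r, t, "visual") for h, r, t in visual_triples]
    for h, r, t in text_triples:
        grounded = any(linked(h, vh) or linked(t, vt) for vh, _, vt in visual_triples)
        fused.append((h, r, t, "grounded" if grounded else "text-only"))
    return fused

for triple in fuse(scene_graph, text_kg):
    print(triple)
```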

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

SPARC: Concept-Aligned Sparse Autoencoders for Cross-Model and Cross-Modal Interpretability

Researchers introduced SPARC, a framework that creates unified latent spaces across different AI models and modalities, enabling direct comparison of how various architectures represent identical concepts. The method achieves 0.80 Jaccard similarity on Open Images, tripling alignment compared to previous methods, and enables practical applications like text-guided spatial localization in vision-only models.
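
The cross-model comparison itself is simple once both models share a concept space: take the set of concepts each sparse autoencoder activates on the same input and compute their Jaccard overlap, as in the sketch below, where random sparse codes stand in for real SAE activations.

```python
import torch

def active_concepts(sae_codes: torch.Tensor, threshold: float = 0.0) -> set[int]:
    """Indices of concepts whose activation exceeds the threshold."""
    return set(torch.nonzero(sae_codes > threshold).flatten().tolist())

def jaccard(a: set[int], b: set[int]) -> float:
    return len(a & b) / max(len(a | b), 1)

torch.manual_seed(0)
codes_model_a = torch.relu(torch.randn(4096) - 2.0)   # sparse codes from model A
codes_model_b = torch.relu(torch.randn(4096) - 2.0)   # sparse codes from model B
print(jaccard(active_concepts(codes_model_a), active_concepts(codes_model_b)))
```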

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠

MPFlow: Multi-modal Posterior-Guided Flow Matching for Zero-Shot MRI Reconstruction

Researchers developed MPFlow, a new zero-shot MRI reconstruction framework that uses multi-modal data and rectified flow to improve medical imaging quality. The system reduces tumor hallucinations by 15% while using 80% fewer sampling steps compared to existing diffusion methods, potentially advancing AI applications in medical diagnostics.
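
A bare-bones version of flow-based zero-shot MRI reconstruction alternates Euler steps along a learned flow with a hard data-consistency projection onto the acquired k-space samples. The sketch below shows only that skeleton: the `velocity` function is a stand-in for the trained network, and MPFlow's posterior guidance and multi-modal conditioning are not reproduced.

```python
import torch

def velocity(x: torch.Tensor, t: float) -> torch.Tensor:
    return -x  # stand-in for the trained flow network v_theta(x, t, condition)

def reconstruct(measured_kspace: torch.Tensor, mask: torch.Tensor, steps: int = 8):
    x = torch.randn(*measured_kspace.shape)          # start from pure noise
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity(x, i * dt)             # one Euler step along the flow
        k = torch.fft.fft2(x)                        # move to k-space
        k = torch.where(mask, measured_kspace, k)    # keep the acquired samples
        x = torch.fft.ifft2(k).real                  # back to the image domain
    return x

mask = torch.rand(64, 64) < 0.3                      # 30% of k-space acquired
kspace = torch.fft.fft2(torch.randn(64, 64))
kspace[~mask] = 0                                    # unacquired lines are unknown
print(reconstruct(kspace, mask).shape)
```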

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠

ToolVQA: A Dataset for Multi-step Reasoning VQA with External Tools

Researchers introduce ToolVQA, a large-scale multimodal dataset with 23K instances designed to improve AI models' ability to use external tools for visual question answering. The dataset features real-world contexts and multi-step reasoning tasks, with fine-tuned 7B models outperforming GPT-3.5-turbo on various benchmarks.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠

Benchmarking MLLM-based Web Understanding: Reasoning, Robustness and Safety

Researchers introduced WebRRSBench, a comprehensive benchmark evaluating multimodal large language models' reasoning, robustness, and safety capabilities for web understanding tasks. Testing 11 MLLMs on 3,799 QA pairs from 729 websites revealed significant gaps in compositional reasoning, UI robustness, and safety-critical action recognition.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠

OCR or Not? Rethinking Document Information Extraction in the MLLMs Era with Real-World Large-Scale Datasets

A large-scale benchmarking study finds that powerful Multimodal Large Language Models (MLLMs) can extract information from business documents using image-only input, potentially eliminating the need for traditional OCR preprocessing. The research demonstrates that well-designed prompts and instructions can further enhance MLLM performance in document processing tasks.
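
In practice, "image-only input" just means sending the document image straight to a vision-capable chat model with an extraction prompt, as in the hedged example below; the model name, prompt, and output schema are placeholders rather than the paper's exact setup.

```python
import base64
from openai import OpenAI

client = OpenAI()

def extract_fields(image_path: str) -> str:
    """Extract key fields from a document image with no OCR preprocessing."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # any vision-capable chat model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract invoice_number, date, and total as JSON. "
                         "Use null for fields you cannot read."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(extract_fields("invoice.png"))
```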

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10
🧠

xLLM Technical Report

xLLM is a new open-source Large Language Model inference framework that delivers significantly improved performance for enterprise AI deployments. The framework achieves 1.7-2.2x higher throughput compared to existing solutions like MindIE and vLLM-Ascend through novel architectural optimizations including decoupled service-engine design and intelligent scheduling.
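
The decoupled service-engine idea can be illustrated with a toy queue: the service path only enqueues requests, while a separate engine thread forms batches and executes them. This is a didactic sketch, not xLLM's scheduler.

```python
import queue
import threading
import time

requests: "queue.Queue[str]" = queue.Queue()
results: dict[str, str] = {}

def service(prompt: str) -> None:
    requests.put(prompt)                          # non-blocking accept path

def engine_loop(batch_size: int = 4) -> None:
    while True:
        batch = [requests.get()]                  # wait for at least one request
        while len(batch) < batch_size and not requests.empty():
            batch.append(requests.get_nowait())   # opportunistic batching
        for p in batch:
            results[p] = f"generated({p})"        # stand-in for a model forward pass
        time.sleep(0.01)

threading.Thread(target=engine_loop, daemon=True).start()
for i in range(8):
    service(f"prompt-{i}")
time.sleep(0.1)
print(len(results), "requests served")
```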

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10
🧠

CAPT: Confusion-Aware Prompt Tuning for Reducing Vision-Language Misalignment

Researchers propose CAPT, a Confusion-Aware Prompt Tuning framework that addresses systematic misclassifications in vision-language models like CLIP by learning from the model's own confusion patterns. The method uses a Confusion Bank to model persistent category misalignments and introduces specialized modules to capture both semantic and sample-level confusion cues.
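
A minimal reading of the Confusion Bank is a running count of (true class, predicted class) errors that later re-weights persistently confused classes during prompt tuning. The sketch below implements only that counting-and-weighting skeleton, which is an assumption about the mechanism rather than CAPT's actual modules.

```python
from collections import Counter
import torch

confusion_bank: Counter = Counter()   # maps (true class, predicted class) -> error count

def update_bank(logits: torch.Tensor, labels: torch.Tensor) -> None:
    preds = logits.argmax(dim=-1)
    for y, p in zip(labels.tolist(), preds.tolist()):
        if y != p:
            confusion_bank[(y, p)] += 1

def confusion_weight(label: int, base: float = 1.0, scale: float = 0.1) -> float:
    """Up-weight samples from classes the model persistently confuses."""
    persistent = sum(c for (y, _), c in confusion_bank.items() if y == label)
    return base + scale * persistent

logits = torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))
update_bank(logits, labels)
print(confusion_weight(int(labels[0])))
```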

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance

Researchers introduce Kiwi-Edit, a new video editing architecture that combines instruction-based and reference-guided editing for more precise visual control. The team created RefVIE, a large-scale dataset for training, and achieved state-of-the-art results in controllable video editing through a unified approach that addresses limitations of natural language descriptions.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

Zatom-1: A Multimodal Flow Foundation Model for 3D Molecules and Materials

Researchers introduce Zatom-1, the first foundation model that unifies generative and predictive learning for both 3D molecules and materials using a multimodal flow matching approach. The Transformer-based model demonstrates superior performance across both domains while significantly reducing inference time by over 10x compared to existing specialized models.
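
For readers unfamiliar with flow matching, the training signal is compact: sample a point on a straight path between noise and data and regress the network onto that path's constant velocity. The sketch below shows the generic loss on toy 3D coordinates; Zatom-1's multimodal conditioning and representation of molecules and materials are not reproduced here.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3 + 1, 128), nn.SiLU(), nn.Linear(128, 3))

def flow_matching_loss(x1: torch.Tensor) -> torch.Tensor:
    """Conditional flow matching with a linear interpolation path."""
    x0 = torch.randn_like(x1)                      # noise sample
    t = torch.rand(x1.shape[0], 1)                 # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                     # point on the straight path
    target_velocity = x1 - x0                      # constant velocity of that path
    pred = net(torch.cat([xt, t], dim=-1))
    return ((pred - target_velocity) ** 2).mean()

coords = torch.randn(32, 3)                        # e.g. 3D atom positions
print(float(flow_matching_loss(coords)))
```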

$ATOM
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

Molmo2 is a new open-source family of vision-language models that achieves state-of-the-art performance among open models, particularly excelling in video understanding and pixel-level grounding tasks. The research introduces 7 new video datasets and 2 multi-image datasets collected without using proprietary VLMs, along with an 8B parameter model that outperforms existing open-weight models and even some proprietary models on specific tasks.

AI · Bullish · OpenAI News · Aug 28 · 7/10
🧠

Introducing gpt-realtime and Realtime API updates

OpenAI has released an advanced speech-to-speech model called gpt-realtime along with significant Realtime API updates. The new capabilities include MCP server support, image input functionality, and SIP phone calling support, expanding the platform's real-time communication abilities.

AI · Bullish · Google DeepMind Blog · May 20 · 7/10
🧠

Announcing Gemma 3n preview: Powerful, efficient, mobile-first AI

Google announces the Gemma 3n preview, a new open-weight AI model optimized for mobile devices with multimodal capabilities including audio processing. The model features a 2-in-1 architecture designed to enable fast, interactive AI applications directly on devices.

Page 1 of 4