MCERF: Advancing Multimodal LLM Evaluation of Engineering Documentation with Enhanced Retrieval
Researchers introduce MCERF, a multimodal retrieval framework that combines vision-language models with LLM reasoning to improve question answering over engineering documents. Through adaptive query routing and hybrid retrieval strategies, the system handles complex multimodal content such as tables, diagrams, and dense technical text, achieving a 41.1% relative accuracy improvement over baseline RAG systems.
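
The summary does not specify MCERF's internals, but the two named mechanisms can be illustrated with a minimal sketch: a router that picks which content modalities to search based on the query, and a hybrid retriever that blends a dense-similarity score with a keyword-overlap score. All names, keyword lists, and the bigram stand-in for embedding similarity are hypothetical, not from the paper.

```python
# Illustrative sketch of adaptive routing + hybrid retrieval.
# All identifiers here are hypothetical, not from the MCERF paper.
import math
import re
from collections import Counter
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    modality: str  # "text", "table", or "diagram"


def keyword_score(query: str, chunk: Chunk) -> float:
    # Sparse signal: word overlap, lightly length-normalized.
    q = Counter(re.findall(r"\w+", query.lower()))
    c = Counter(re.findall(r"\w+", chunk.text.lower()))
    overlap = sum(min(q[w], c[w]) for w in q)
    return overlap / (1 + math.sqrt(sum(c.values())))


def dense_score(query: str, chunk: Chunk) -> float:
    # Stand-in for embedding similarity: character-bigram cosine.
    def bigrams(s: str) -> Counter:
        s = s.lower()
        return Counter(s[i : i + 2] for i in range(len(s) - 1))

    a, b = bigrams(query), bigrams(chunk.text)
    dot = sum(a[g] * b[g] for g in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def route(query: str) -> set:
    # Adaptive routing: choose modalities to search from query cues.
    # (A real system might use an LLM classifier instead of keywords.)
    q = query.lower()
    if any(w in q for w in ("table", "value", "spec", "rating")):
        return {"table", "text"}
    if any(w in q for w in ("diagram", "figure", "schematic")):
        return {"diagram", "text"}
    return {"text"}


def hybrid_retrieve(query: str, chunks: list, alpha: float = 0.5, k: int = 2) -> list:
    # Hybrid retrieval: weighted blend of dense and sparse scores,
    # restricted to the modalities the router selected.
    allowed = route(query)
    scored = [
        (alpha * dense_score(query, c) + (1 - alpha) * keyword_score(query, c), c)
        for c in chunks
        if c.modality in allowed
    ]
    scored.sort(key=lambda t: -t[0])
    return [c for _, c in scored[:k]]
```

In this sketch a question mentioning a spec table is routed away from diagram chunks entirely, so the hybrid scorer only ranks table and text content; swapping the bigram cosine for a real embedding model would not change the control flow.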