
Lumos: Empowering Multimodal LLMs with Scene Text Recognition

Ashish Shenoy, Yichao Lu, Srihari Jayakumar, Debojeet Chatterjee, Mohsen Moslehpour, Pierce Chuang, Abhay Harpale, Vikas Bhardwaj, Di Xu, Shicong Zhao, Longfang Zhao, Ankit Ramchandani, Xin Luna Dong, Anuj Kumar. No Venue, 2024.

[Paper]

We introduce Lumos, the first end-to-end multimodal question-answering system with text-understanding capabilities. At the core of Lumos is a Scene Text Recognition (STR) component that extracts text from first-person point-of-view images; its output is used to augment the input to a Multimodal Large Language Model (MM-LLM). While building Lumos, we encountered numerous challenges related to STR quality, overall latency, and model inference. In this paper, we delve into these challenges and discuss the system architecture, design choices, and modeling techniques employed to overcome them. We also provide a comprehensive evaluation of each component, demonstrating high quality and efficiency.
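The core data flow the abstract describes, STR output injected into the MM-LLM's input, can be sketched roughly as follows. All names here (`run_str`, `build_augmented_prompt`, the prompt template, and the sample recognized text) are illustrative assumptions, not the paper's actual API or implementation:

```python
# Sketch of the STR -> MM-LLM prompt-augmentation flow described in the
# abstract. Function names, fields, and the prompt template are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class STRResult:
    """Text lines recognized in a first-person point-of-view image."""
    lines: List[str]


def run_str(image_bytes: bytes) -> STRResult:
    # Placeholder for the Scene Text Recognition component; a real system
    # would run text detection + recognition models on the image here.
    return STRResult(lines=["DECAF COFFEE", "$4.99"])


def build_augmented_prompt(question: str, str_result: STRResult) -> str:
    # The recognized scene text is appended to the user's question so the
    # MM-LLM can reason over it alongside the visual input.
    scene_text = "\n".join(str_result.lines)
    return (
        f"Scene text extracted from the image:\n{scene_text}\n\n"
        f"User question: {question}"
    )


prompt = build_augmented_prompt("How much does the coffee cost?", run_str(b""))
print(prompt)
```

The key design point the abstract highlights is that STR runs as a separate component whose text output augments the MM-LLM prompt, rather than relying on the MM-LLM alone to read scene text.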
