MULTI: Multimodal Understanding Leaderboard with Text and Images

Zichen Zhu, Yang Xu, Lu Chen, Jingkai Yang, Yichuan Ma, Yiming Sun, Hailin Wen,
Jiaqi Liu, Jinyu Cai, Yingzi Ma, Situo Zhang, Zihan Zhao, Liangtai Sun, Kai Yu
X-LANCE Lab, Department of Computer Science and Engineering  
MoE Key Lab of Artificial Intelligence, SJTU AI Institute
Shanghai Jiao Tong University, Shanghai, China

†Corresponding Authors
JamesZhutheThird@sjtu.edu.cn, xuyang0112@sjtu.edu.cn,
chenlusz@sjtu.edu.cn, kai.yu@sjtu.edu.cn
arXiv · Code · 🤗 Dataset · 🏆 Leaderboard · 📮 Submit


Rapid progress in multimodal large language models (MLLMs) highlights the need for challenging yet realistic benchmarks in the academic community, whereas existing benchmarks primarily focus on understanding simple natural images and short contexts. In this paper, we present MULTI, a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images and reasoning over long contexts. MULTI provides multimodal inputs and requires responses that are either precise or open-ended, reflecting real-life examination styles. MULTI includes over 18,000 questions and challenges MLLMs with a variety of tasks, ranging from formula derivation to image detail analysis and cross-modality reasoning. We also introduce MULTI-Elite, a carefully selected hard subset of 500 questions, and MULTI-Extend, a collection of more than 4,500 external knowledge context pieces. Our evaluation indicates significant room for MLLM advancement: GPT-4V achieves 63.7% accuracy on MULTI, while other MLLMs score between 28.5% and 55.3%. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.


How can I access MULTI 🤔?

Please visit our HuggingFace page to access the MULTI dataset. Our code is available on GitHub. You can get detailed scores through the evaluation page. If you want to add your model to our leaderboard, please fill out this questionnaire.
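
For quick reference, below is a minimal Python sketch of loading the dataset with the HuggingFace datasets library. The repository ID "OpenDFM/MULTI-Benchmark" and the split name "test" are assumptions used for illustration; please refer to the HuggingFace page above for the exact identifiers and the official loading instructions.

    # A minimal sketch for loading MULTI from the HuggingFace Hub.
    # NOTE: the repository ID and split name below are assumptions --
    # check the HuggingFace page for the exact identifiers.
    from datasets import load_dataset

    dataset = load_dataset("OpenDFM/MULTI-Benchmark", split="test")

    # Each entry pairs a question (possibly with tables and images) with its answer.
    example = dataset[0]
    print(example.keys())
    print(example)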

BibTeX

@misc{zhu2024multi,
      title={MULTI: Multimodal Understanding Leaderboard with Text and Images},
      author={Zichen Zhu and Yang Xu and Lu Chen and Jingkai Yang and Yichuan Ma and Yiming Sun and Hailin Wen and Jiaqi Liu and Jinyu Cai and Yingzi Ma and Situo Zhang and Zihan Zhao and Liangtai Sun and Kai Yu},
      year={2024},
      eprint={2402.03173},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}