MULTI: Multimodal Understanding Leaderboard with Text and Images

Zichen Zhu, Yang Xu, Lu Chen, Jingkai Yang, Yichuan Ma, Yiming Sun, Hailin Wen,
Jiaqi Liu, Jinyu Cai, Yingzi Ma, Situo Zhang, Zihan Zhao, Liangtai Sun, Kai Yu
X-LANCE Lab, Department of Computer Science and Engineering  
MoE Key Lab of Artificial Intelligence, SJTU AI Institute
Shanghai Jiao Tong University, Shanghai, China

†Corresponding Authors
JamesZhutheThird@sjtu.edu.cn, xuyang0112@sjtu.edu.cn,
chenlusz@sjtu.edu.cn, kai.yu@sjtu.edu.cn
arXiv · Code · 🤗 Dataset · 🏆 Leaderboard · 📮 Submit

The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. Existing datasets often feature synthetic or overly simplistic tasks, on which some models have already surpassed human expert baselines. In this paper, we present MULTI, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, MULTI evaluates models against real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. We also introduce MULTI-Elite, a carefully selected hard subset of 500 questions, and MULTI-Extend, a collection of more than 4,500 external knowledge context pieces for testing in-context learning capabilities. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.

Statistics


MULTI consists of more than 18K questions and 8K images, covering 23 subjects and 4 educational levels. It is one of the largest Chinese multimodal datasets for complex scientific reasoning and image understanding.

Data Annotation


Annotation Pipeline


Annotation Platform


Annotation Examples


Post-processing Examples


Question Examples

Prompts


We provide a variety of prompts for different evaluation settings, tailored to models with different image input requirements.
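As an illustration, the sketch below shows how a prompt for one question might be assembled depending on a model's image-input mode (T / One / SI / MI, as defined in the leaderboard legend). The message structure, field names, and placeholder tags are assumptions for demonstration only, not the exact templates shipped with MULTI.

```python
# Illustrative sketch: assembling a chat-style prompt according to a model's
# image-input mode. Field names and "[image N]" tags are assumed for this example.
from typing import Dict, List


def build_messages(question_text: str, image_paths: List[str], mode: str) -> List[Dict]:
    """Return a chat-style message list for one question."""
    if mode == "T":
        # Pure-text LLM: images cannot be passed, so only mark their positions.
        placeholders = " ".join(f"[image {i + 1}]" for i in range(len(image_paths)))
        return [{"role": "user", "content": f"{placeholders}\n{question_text}"}]

    if mode == "One":
        # Only one image in the first turn: keep the first image alongside the text.
        content = [{"type": "image", "path": image_paths[0]}] if image_paths else []
        content.append({"type": "text", "text": question_text})
        return [{"role": "user", "content": content}]

    if mode == "SI":
        # Single image per turn: spread the images over several user turns.
        messages = [
            {"role": "user", "content": [{"type": "image", "path": p}]}
            for p in image_paths
        ]
        messages.append({"role": "user", "content": [{"type": "text", "text": question_text}]})
        return messages

    if mode == "MI":
        # Multiple images in one turn: send all images together with the text.
        content = [{"type": "image", "path": p} for p in image_paths]
        content.append({"type": "text", "text": question_text})
        return [{"role": "user", "content": content}]

    raise ValueError(f"unknown mode: {mode}")
```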

Leaderboard

For most models we provide detailed categorised results on both MULTI and the MULTI-Elite subset. For o1-like models, we only provide results on MULTI-Elite due to the cost.


Results can be classified by education level, image number, or question type.

Click on Info to expand model details. Click on MULTI or MULTI-Elite to expand detailed results.

[Leaderboard table — columns: Name, Modality, MULTI Overall, MULTI-Elite Overall]

Results of different models on MULTI and MULTI-Elite.


T: pure-text LLM; One: only one image in the first turn; SI: single image in each turn; MI: multiple images in one turn. An underline means the model requires an image as input.
JuH: junior high school level; SeH: senior high school level; Uni: university level; Driv: Chinese Driving Test; AAT: Chinese Administrative Aptitude Test.
NI: questions with no image (pure text); SI: questions with a single image; MI: questions with multiple images.
SA: multiple-choice questions with a single correct answer; MA: multiple-choice questions with multiple correct answers (MA Acc.: accuracy on MA questions); FB: fill-in-the-blank questions (no more than 10 words); OP: open-ended writing questions (more than 10 words). A scoring sketch follows this legend.
The best-performing models are marked with 🥇(1st), 🥈(2nd), 🥉(3rd), and 🏅(Top 5). Several anchoring models (marked with ⚓) were chosen to guide the selection process of MULTI-Elite, so the performance of these models is relatively lower.
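The sketch below shows how per-question-type accuracy (including MA Acc.) could be computed from model predictions. The matching rules (exact option-set match for SA/MA, exact string match for FB, OP excluded from automatic scoring) are simplifying assumptions; the official evaluation script may differ.

```python
# Illustrative scoring sketch: accuracy grouped by question type (SA / MA / FB / OP).
# Matching rules here are assumptions, not the official MULTI evaluation logic.
from collections import defaultdict


def score_by_type(records):
    """records: iterable of dicts with keys 'type', 'prediction', 'answer'."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        qtype = r["type"]  # one of "SA", "MA", "FB", "OP"
        if qtype == "OP":
            continue  # open-ended answers need manual or model-based judging
        total[qtype] += 1
        if qtype in ("SA", "MA"):
            # Treat answers as sets of option letters; MA requires an exact set match.
            if set(r["prediction"].upper()) == set(r["answer"].upper()):
                correct[qtype] += 1
        elif qtype == "FB":
            if r["prediction"].strip() == r["answer"].strip():
                correct[qtype] += 1
    return {t: correct[t] / total[t] for t in total if total[t]}
```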

Error Distribution


To explore the primary causes of errors, we analyzed 165 questions (one-third of MULTI-Elite), using predictions generated by the 8 models evaluated under the CoT prompt setting.
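For reference, error annotations of this kind could be tallied per model as sketched below. The annotation record format and the error-category labels in the example are hypothetical placeholders, not the exact taxonomy used in our analysis.

```python
# Illustrative sketch: tallying error causes per model from hand-labeled annotations.
# Record format and category names (e.g. "perception", "reasoning") are hypothetical.
from collections import Counter


def error_distribution(annotations):
    """annotations: list of dicts like {"model": str, "error_type": str or None}."""
    per_model = {}
    for a in annotations:
        if a["error_type"] is None:  # correct answer, nothing to count
            continue
        per_model.setdefault(a["model"], Counter())[a["error_type"]] += 1
    # Normalize each model's counts into a distribution over error causes.
    return {
        model: {etype: n / sum(counts.values()) for etype, n in counts.items()}
        for model, counts in per_model.items()
    }
```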


How can I access MULTI 🤔?

Please visit our HuggingFace page to access the MULTI dataset. Our code is available on GitHub. You can get detailed scores through the evaluation page. If you want to add your model to our leaderboard, please fill in this questionnaire.
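A minimal sketch of loading the dataset with the Hugging Face `datasets` library is shown below. The repository id and split name are placeholders; check our HuggingFace page for the actual dataset name and available configurations.

```python
# Minimal sketch of loading MULTI via the `datasets` library.
# "<org>/MULTI" and the "test" split are placeholders -- see our HuggingFace page.
from datasets import load_dataset

dataset = load_dataset("<org>/MULTI")  # placeholder repository id
print(dataset)                         # inspect the available splits
print(dataset["test"][0])              # peek at one question record (split name assumed)
```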

BibTeX

@misc{zhu2024multi,
      title={MULTI: Multimodal Understanding Leaderboard with Text and Images},
      author={Zichen Zhu and Yang Xu and Lu Chen and Jingkai Yang and Yichuan Ma and Yiming Sun and Hailin Wen and Jiaqi Liu and Jinyu Cai and Yingzi Ma and Situo Zhang and Zihan Zhao and Liangtai Sun and Kai Yu},
      year={2024},
      eprint={2402.03173},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}