VisualQuest: A Benchmark for Abstract Visual Reasoning in MLLMs
arXiv:2503.19936v2 Announce Type: replace
Abstract: We introduce VisualQuest, a novel dataset designed to rigorously evaluate multimodal large language models (MLLMs) on abstract visual reasoning tasks that require the integration of symbolic, cultural, and linguistic knowledge. Unlike existing benchmarks that focus on direct image captioning or classification of realistic images, VisualQuest comprises 3,551 non-photographic, stylized images spanning four categories: Public Figures, Popular Culture, Linguistic Expressions, and Literary Works. Each image is paired with targeted questions to probe complex reasoning. We benchmark ten state-of-the-art MLLMs and find that only Gemini-2.5-flash and GPT-4o achieve strong overall performance, while 3.7 percent of the images remain unrecognized by any model, underscoring persistent challenges in multimodal understanding. Fine-grained analysis shows that Gemini excels at recognizing stylized public figures, whereas GPT-4o leads in linguistic reasoning tasks such as visual puns and emoji combinations. VisualQuest provides a comprehensive and challenging resource for advancing research in abstract visual reasoning and highlights key areas for future model improvement. The dataset is available at https://github.com/xkt88/VISUALQUEST.