Sungsoo Kim's Blog

From Multimodal LLM to Human-level AI

14 November 2024


Article Source: From Multimodal LLM to Human-level AI

Abstract

This is the video recording of the Multimodal Large Language Model (MLLM) Series Tutorial @ ACM MM 2024, Melbourne, Australia.

Tutorial homepage: https://mllm2024.github.io/ACM-MM2024/

Artificial intelligence (AI) encompasses knowledge acquisition and real-world grounding across various modalities. As a multidisciplinary research field, multimodal large language models (MLLMs) have recently garnered growing interest in both academia and industry, reflecting an unprecedented push toward achieving human-level AI via MLLMs. These large models offer an effective vehicle for understanding, reasoning, and planning by integrating and modeling diverse information modalities, including language, visual, auditory, and sensory data. This tutorial aims to deliver a comprehensive review of cutting-edge research in MLLMs, focusing on the following key areas: MLLM architecture, modality, functionality, instruction learning, multimodal hallucination, MLLM evaluation, and multimodal reasoning. We will explore technical advancements, synthesize key challenges, and discuss potential avenues for future research.
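To make the "integrating diverse modalities" idea concrete, below is a minimal, illustrative PyTorch sketch of the encoder-projector-LLM pattern that many current MLLMs follow (e.g., LLaVA-style systems): a vision encoder produces visual tokens, a small projector maps them into the language model's embedding space, and the language model attends over visual and text tokens jointly. All module names, dimensions, and toy components here are assumptions for illustration, not the architecture presented in the tutorial.

```python
import torch
import torch.nn as nn


class TinyMultimodalLM(nn.Module):
    """Toy encoder-projector-LLM stack illustrating the common MLLM pattern."""

    def __init__(self, vocab_size=1000, d_vision=64, d_model=128):
        super().__init__()
        # Stand-in vision encoder; a real system would use a pretrained ViT/CLIP backbone.
        self.vision_encoder = nn.Sequential(nn.Flatten(2), nn.Linear(16 * 16, d_vision))
        # Projector ("connector") aligning visual features with the LLM embedding space.
        self.projector = nn.Linear(d_vision, d_model)
        # Stand-in language model: token embeddings plus a small Transformer.
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, text_ids):
        # image: (B, C, 16, 16) toy image; text_ids: (B, T) token ids.
        vis = self.vision_encoder(image)      # (B, C, d_vision) "visual tokens"
        vis = self.projector(vis)             # (B, C, d_model)
        txt = self.tok_emb(text_ids)          # (B, T, d_model)
        seq = torch.cat([vis, txt], dim=1)    # prepend visual tokens to the text sequence
        hidden = self.llm(seq)
        return self.lm_head(hidden[:, vis.size(1):])  # next-token logits for text positions


model = TinyMultimodalLM()
logits = model(torch.randn(2, 3, 16, 16), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1000])
```

The tutorial's architecture and instruction-learning topics largely concern how such connectors, encoders, and training objectives are chosen and scaled; this sketch only fixes the overall data flow.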
