Distributed LLMs and Multimodal Large Language Models: A Survey on Advances, Challenges, and Future Directions
By: Hadi Amini, Md Jueal Mia, Yasaman Saadati, and more
Potential Business Impact:
Enables computers to understand text, images, and audio together.
Language models (LMs) are machine learning models designed to predict linguistic patterns by estimating the probability of word sequences from large-scale datasets, such as text corpora. LMs have a wide range of applications in natural language processing (NLP) tasks, including autocomplete and machine translation. Although larger datasets typically enhance LM performance, scalability remains a challenge due to constraints in computational power and resources. Distributed computing strategies offer essential solutions for improving scalability and managing the growing computational demand. Further, the use of sensitive datasets in training and deployment raises significant privacy concerns. Recent research has focused on developing decentralized techniques that enable distributed training and inference while leveraging diverse computational resources and supporting edge AI. This paper presents a survey of distributed solutions for various LMs, including large language models (LLMs), vision language models (VLMs), multimodal LLMs (MLLMs), and small language models (SLMs). While LLMs focus on processing and generating text, MLLMs are designed to handle multiple data modalities (e.g., text, images, and audio) and to integrate them for broader applications. To this end, this paper reviews key advancements across the MLLM pipeline, including distributed training, inference, fine-tuning, and deployment, while identifying the contributions, limitations, and areas for future improvement of each approach. Further, it categorizes the literature according to six primary focus areas of decentralization. Our analysis identifies gaps in current methodologies for enabling distributed solutions for LMs and outlines future research directions, emphasizing the need for novel solutions to enhance the robustness and applicability of distributed LMs.
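The abstract's core technical thread can be made concrete with a small example: an LM estimates sequence probability autoregressively, P(w_1, ..., w_T) = Π_t P(w_t | w_1, ..., w_{t-1}), and the simplest distributed strategy the survey touches on, data parallelism, replicates the model across workers while sharding the training data. The sketch below illustrates this with PyTorch's DistributedDataParallel; it is a minimal illustration rather than a method from the paper, and the toy two-layer model, random token batches, vocabulary size, and hyperparameters are placeholder assumptions. It assumes a launch such as: torchrun --nproc_per_node=2 train_ddp.py

import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="gloo")   # use "nccl" on GPU clusters
    rank = dist.get_rank()

    vocab, d_model, seq_len = 1000, 64, 32    # placeholder sizes for a toy LM
    model = DDP(nn.Sequential(                # stand-in for a transformer LM
        nn.Embedding(vocab, d_model),
        nn.Linear(d_model, vocab),
    ))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(10):
        tokens = torch.randint(0, vocab, (8, seq_len))  # each rank draws its own data shard
        logits = model(tokens[:, :-1])                   # predict the next token at each position
        loss = loss_fn(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()           # DDP all-reduces gradients across workers here
        opt.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.3f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

In a real setting each rank would read a distinct shard of a corpus (e.g., via torch.utils.data.distributed.DistributedSampler), and the same replicate-and-synchronize pattern generalizes to the model-, pipeline-, and federated-style schemes the survey reviews.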
Similar Papers
An Explorative Study on Distributed Computing Techniques in Training and Inference of Large Language Models
Distributed, Parallel, and Cluster Computing
Shows how large AI models can be trained and run on ordinary, distributed computers.
A Survey on Collaborative Mechanisms Between Large and Small Language Models
Artificial Intelligence
Examines how large and small models can collaborate so capable AI runs on phones and other low-power devices.
LLMs4All: A Review on Large Language Models for Research and Applications in Academic Disciplines
Computation and Language
Reviews how large language models support research and applications across academic disciplines.