Towards deployment-centric multimodal AI beyond vision and language
By: Xianyuan Liu, Jiayang Zhang, Shuo Zhou, and more
Potential Business Impact:
AI that learns from many kinds of data, not just pictures and words.
Multimodal artificial intelligence (AI) integrates diverse types of data via machine learning to improve understanding, prediction, and decision-making across disciplines such as healthcare, science, and engineering. However, most multimodal AI advances focus on models for vision and language data, while their deployability remains a key challenge. We advocate a deployment-centric workflow that incorporates deployment constraints early to reduce the likelihood of undeployable solutions, complementing data-centric and model-centric approaches. We also emphasise deeper integration across multiple levels of multimodality and multidisciplinary collaboration to significantly broaden the research scope beyond vision and language. To facilitate this approach, we identify common multimodal-AI-specific challenges shared across disciplines and examine three real-world use cases: pandemic response, self-driving car design, and climate change adaptation, drawing expertise from healthcare, social science, engineering, science, sustainability, and finance. By fostering multidisciplinary dialogue and open research practices, our community can accelerate deployment-centric development for broad societal impact.
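To make the core idea concrete, the sketch below shows a minimal multimodal fusion model that encodes an imaging feature vector and a tabular (e.g. clinical) feature vector separately and combines them for a prediction. The modalities, layer sizes, and PyTorch implementation are illustrative assumptions for this page, not the workflow proposed by the authors.

# Minimal, hypothetical sketch of multimodal fusion (illustrative only).
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Encode each modality separately, then fuse by concatenation."""
    def __init__(self, img_dim=512, tab_dim=32, hidden=64, n_classes=2):
        super().__init__()
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.tab_encoder = nn.Sequential(nn.Linear(tab_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, img_feats, tab_feats):
        # Concatenate the per-modality embeddings before classification.
        fused = torch.cat([self.img_encoder(img_feats),
                           self.tab_encoder(tab_feats)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])

In the deployment-centric workflow the paper advocates, constraints such as model size, latency, or available modalities at the point of care would be specified alongside a model like this from the start, rather than being checked only after training.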
Similar Papers
Toward AI-driven Multimodal Interfaces for Industrial CAD Modeling
Human-Computer Interaction
Helps designers build 3D models faster with AI.
Decoding the Multimodal Maze: A Systematic Review on the Adoption of Explainability in Multimodal Attention-based Models
Machine Learning (CS)
Helps us understand how AI combines different kinds of information.
A systematic review of challenges and proposed solutions in modeling multimodal data
Machine Learning (CS)
Helps doctors understand patients better.