Implementation of transformer-based LLMs with large-scale optoelectronic neurons on a CMOS image sensor platform
By: Neil Na, Chih-Hao Cheng, Shou-Chen Hsu, and more
Potential Business Impact:
Makes AI run much faster and use less power.
The recent rapid deployment of datacenter infrastructure for running large language models (LLMs) and related artificial intelligence (AI) applications in the cloud is predicted to incur exponentially growing energy consumption in the near future. In this paper, we propose and analyze an implementation of the transformer model, the cornerstone of modern LLMs, with novel large-scale optoelectronic neurons (OENs) built on the commercially available complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) platform. With all of the required optoelectronic devices and electronic circuits integrated in a chiplet only about 2 cm by 3 cm in size, the 175 billion parameters of GPT-3 are shown to perform inference at an unprecedented speed of 12.6 POPS using only a 40 nm CMOS process node, along with a high power efficiency of 74 TOPS/W and a high area efficiency of 19 TOPS/mm², both surpassing comparable digital electronics by roughly two orders of magnitude. The influence of quantization formats and hardware-induced errors is numerically investigated and shown to have a minimal impact. Our study presents a new yet practical path toward analog neural processing units (NPUs) that complement existing digital processing units.
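For context, the headline figures are mutually consistent: at 19 TOPS/mm², a roughly 2 cm × 3 cm (600 mm²) chiplet gives about 11.4 POPS, in line with the quoted 12.6 POPS, and 12.6 POPS at 74 TOPS/W implies roughly 170 W of power draw. The Python sketch below is not the authors' code; it merely illustrates the kind of numerical study the abstract describes, quantizing a matrix-vector product (the core operation an analog OEN array would accelerate) and adding Gaussian read-out noise to emulate hardware-induced errors. The function names, bit widths, and noise level are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch: effect of quantization and analog noise on a
# matrix-vector product. Bit widths and noise_std are assumed values.
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Uniform symmetric quantization of x to the given bit width."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def analog_matvec(W, x, bits=8, noise_std=0.01):
    """Quantize weights and activations, then add Gaussian read-out
    noise to emulate analog (optoelectronic) compute imperfections."""
    Wq, xq = quantize(W, bits), quantize(x, bits)
    y = Wq @ xq
    return y + noise_std * np.std(y) * rng.standard_normal(y.shape)

# Compare against a full-precision reference at several bit widths.
W = rng.standard_normal((512, 512))
x = rng.standard_normal(512)
y_ref = W @ x

for bits in (4, 6, 8):
    y = analog_matvec(W, x, bits=bits)
    rel_err = np.linalg.norm(y - y_ref) / np.linalg.norm(y_ref)
    print(f"{bits}-bit + noise: relative error = {rel_err:.4f}")
```

Under these assumptions, the relative error shrinks rapidly with bit width, which is consistent with the paper's finding that quantization formats and hardware errors have only a minimal impact on inference.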
Similar Papers
What Is Next for LLMs? Next-Generation AI Computing Hardware Using Photonic Chips
Hardware Architecture
Makes AI run much faster and use less power.
EEsizer: LLM-Based AI Agent for Sizing of Analog and Mixed Signal Circuit
Machine Learning (CS)
AI designs computer chips faster and better.
ENLighten: Lighten the Transformer, Enable Efficient Optical Acceleration
Emerging Technologies
Makes AI run faster and use less power.