Implementation of transformer-based LLMs with large-scale optoelectronic neurons on a CMOS image sensor platform

Published: November 6, 2025 | arXiv ID: 2511.04136v1

By: Neil Na, Chih-Hao Cheng, Shou-Chen Hsu, et al.

Potential Business Impact:

Makes AI run much faster and use less power.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The recent rapid deployment of datacenter infrastructure for running large language models (LLMs) and related artificial intelligence (AI) applications in the cloud is predicted to incur exponentially growing energy consumption in the near future. In this paper, we propose and analyze an implementation of the transformer model, the cornerstone of modern LLMs, with novel large-scale optoelectronic neurons (OENs) constructed on the commercially available complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) platform. With all of the required optoelectronic devices and electronic circuits integrated in a chiplet only about 2 cm by 3 cm in size, the 175 billion parameters of GPT-3 are shown to perform inference at an unprecedented speed of 12.6 POPS using only a 40 nm CMOS process node, along with a high power efficiency of 74 TOPS/W and a high area efficiency of 19 TOPS/mm², both surpassing comparable digital electronics by roughly two orders of magnitude. The influence of quantization formats and hardware-induced errors is numerically investigated and shown to have a minimal impact. Our study presents a new yet practical path toward analog neural processing units (NPUs) that complement existing digital processing units.
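As a quick sanity check on these figures (assuming the usual convention that 1 POPS = 1000 TOPS), the stated throughput and efficiencies imply a chip power of roughly 170 W and an active area of about 660 mm², which is consistent with the quoted ~2 cm by 3 cm chiplet:

```python
# Back-of-the-envelope consistency check of the numbers quoted in the
# abstract. The unit convention (1 POPS = 1000 TOPS) is my assumption.

THROUGHPUT_TOPS = 12.6e3     # 12.6 POPS of inference throughput
POWER_EFF_TOPS_PER_W = 74    # stated power efficiency
AREA_EFF_TOPS_PER_MM2 = 19   # stated area efficiency

implied_power_w = THROUGHPUT_TOPS / POWER_EFF_TOPS_PER_W
implied_area_mm2 = THROUGHPUT_TOPS / AREA_EFF_TOPS_PER_MM2
chiplet_area_mm2 = 20 * 30   # the ~2 cm x 3 cm chiplet, in mm^2

print(f"implied power: {implied_power_w:.0f} W")      # ~170 W
print(f"implied area:  {implied_area_mm2:.0f} mm^2")  # ~663 mm^2
print(f"stated area:   {chiplet_area_mm2} mm^2")      # 600 mm^2
```

The implied area slightly exceeds the stated chiplet footprint, which is expected if the area efficiency is quoted for the compute region rather than the full die.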

Page Count
18 pages

Category
Computer Science:
Emerging Technologies