PICNIC: Silicon Photonic Interconnected Chiplets with Computational Network and In-memory Computing for LLM Inference Acceleration

Published: November 6, 2025 | arXiv ID: 2511.04036v1

By: Yue Jiet Chong, Yimin Wang, Zhen Wu, and more

Potential Business Impact:

Enables LLM inference that is substantially faster and more energy-efficient than current GPU hardware.

Business Areas:
Application Specific Integrated Circuit (ASIC) Hardware

This paper presents a 3D-stacked, chiplet-based large language model (LLM) inference accelerator, consisting of non-volatile in-memory-computing processing elements (PEs) and an Inter-PE Computational Network (IPCN), interconnected via silicon photonics to effectively address communication bottlenecks. An LLM mapping scheme was developed to optimize hardware scheduling and workload mapping. Simulation results show the system achieves a $3.95\times$ speedup and a $30\times$ efficiency improvement over the Nvidia A100, even before applying the chiplet clustering and power gating (CCPG) scheme. With CCPG enabled to accommodate larger models, the system achieves further scalability and efficiency gains, attaining a $57\times$ efficiency improvement over the Nvidia H100 at similar throughput.

Country of Origin
🇸🇬 Singapore

Page Count
7 pages

Category
Computer Science:
Hardware Architecture