LightCode: Compiling LLM Inference for Photonic-Electronic Systems
By: Ryan Tomich, Zhizhen Zhong, Dirk Englund
Potential Business Impact:
Makes AI run faster and use less power.
The growing demand for low-latency, energy-efficient inference in large language models (LLMs) has catalyzed interest in heterogeneous architectures. While GPUs remain dominant, they are poorly suited for integration with emerging domain-specific accelerators such as Photonic Tensor Units (PTUs), which offer low-power, high-throughput linear computation. This motivates hybrid compilation strategies that combine photonic and electronic resources. We present LightCode, a compiler framework and simulator for mapping LLM inference workloads across hybrid photonic-electronic systems. LightCode introduces the Stacked Graph, an intermediate representation that encodes multiple hardware-specific realizations of each tensor operation. Hardware assignment is formulated as a constrained subgraph selection problem optimized for latency or energy under parametric cost models. We evaluate LightCode on the prefill stage of GPT-2 and Llama-7B, showing that under our workload and hardware assumptions, (i) photonic hardware reduced energy by up to 50% in our simulated workloads at maximum sequence length; (ii) the multiplexing and assignment strategy yielded latency improvements exceeding 10x; and (iii) optimizing for latency or energy resulted in distinct hardware mappings in our simulations. LightCode offers a modular, foundational framework and simulator for compiling LLMs to emerging photonic accelerators.
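The core idea of selecting one hardware realization per tensor operation can be illustrated with a minimal sketch. This is not LightCode's actual API; the device names, cost numbers, and the single scalar transfer penalty below are all hypothetical, and the real Stacked Graph handles general dataflow graphs rather than a simple op chain. Under those simplifying assumptions, a dynamic program that picks the cheapest device per op, accounting for device-switch costs, might look like:

```python
# Toy sketch of per-op hardware assignment over a chain of tensor ops.
# All devices, costs, and the transfer penalty are hypothetical examples,
# not LightCode's real cost models.

# Candidate realizations per op: device -> (latency_ms, energy_mJ)
stacked_graph = [
    {"gpu": (0.3, 5.0), "ptu": (0.4, 1.0)},   # matmul: GPU fast, PTU frugal
    {"gpu": (0.2, 0.8)},                      # softmax: electronic only
    {"gpu": (1.1, 5.5), "ptu": (0.5, 1.2)},   # matmul
]
TRANSFER = (0.3, 0.5)  # (latency, energy) penalty for switching devices

def assign(objective):
    """Return (total_cost, device_per_op) minimizing the chosen objective."""
    idx = 0 if objective == "latency" else 1
    # best[dev] = cheapest (cost, path) for the prefix ending on device `dev`
    best = {dev: (cost[idx], [dev]) for dev, cost in stacked_graph[0].items()}
    for op in stacked_graph[1:]:
        nxt = {}
        for dev, cost in op.items():
            # Cheapest way to arrive at `dev`, paying the transfer penalty
            # whenever the previous op ran on a different device.
            nxt[dev] = min(
                (prev_cost + cost[idx] + (TRANSFER[idx] if prev != dev else 0),
                 prev_path + [dev])
                for prev, (prev_cost, prev_path) in best.items()
            )
        best = nxt
    return min(best.values())

print(assign("latency"))  # e.g. stays on the GPU early to avoid transfers
print(assign("energy"))   # routes matmuls to the PTU despite transfers
```

Even in this toy instance, the latency-optimal and energy-optimal assignments differ, mirroring the paper's observation (iii) that the two objectives produce distinct hardware mappings.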