Score: 4

Tilus: A Tile-Level GPGPU Programming Language for Low-Precision Computation

Published: April 17, 2025 | arXiv ID: 2504.12984v3

By: Yaoyao Ding, Bohan Hou, Xiao Zhang, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Runs AI language models faster and cheaper by generating efficient low-precision GPU kernels, cutting memory and compute costs.

Business Areas:
GPU Hardware

Serving Large Language Models (LLMs) is critical for AI-powered applications, yet it demands substantial computational resources, particularly in memory bandwidth and computational throughput. Low-precision computation has emerged as a key technique to improve efficiency while reducing resource consumption. Existing approaches for generating low-precision kernels are limited to weight bit widths that are powers of two, and they suffer from suboptimal performance because of high-level GPU programming abstractions. These abstractions restrict critical optimizations, such as fine-grained register management and optimized memory access patterns, that are essential for efficient low-precision computation. In this paper, we introduce Tilus, a domain-specific language designed for General-Purpose GPU (GPGPU) computing that supports low-precision data types with arbitrary bit widths from 1 to 8 while maintaining GPU programmability. Tilus features a thread-block-level programming model, a hierarchical memory space, a novel algebraic layout system, and extensive support for diverse low-precision data types. Tilus programs are compiled into highly efficient GPU programs through automatic vectorization and instruction selection. Extensive experiments demonstrate that Tilus efficiently supports a full spectrum of low-precision data types and outperforms state-of-the-art low-precision kernels. Compared to existing compilers such as Triton and Ladder, as well as hand-optimized kernels such as QuantLLM and Marlin, Tilus achieves performance improvements of $1.75\times$, $2.61\times$, $1.29\times$, and $1.03\times$, respectively. We open-source Tilus at https://github.com/NVIDIA/tilus.
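The abstract's support for "arbitrary bit widths from 1 to 8" means kernels must handle sub-byte weights that do not align to byte boundaries. As a minimal, hypothetical illustration of that packing problem (plain NumPy, not the Tilus API; the helper names `pack_bits` and `unpack_bits` are made up for this sketch), the following shows how 3-bit weights can be packed into and recovered from a byte stream:

```python
import numpy as np

def pack_bits(values: np.ndarray, bit_width: int) -> np.ndarray:
    """Pack unsigned integers of an arbitrary bit width (1-8) into a byte array."""
    assert 1 <= bit_width <= 8
    # Expand each value into its `bit_width` bits, least-significant bit first.
    bits = ((values[:, None] >> np.arange(bit_width)) & 1).astype(np.uint8)
    return np.packbits(bits.reshape(-1), bitorder="little")

def unpack_bits(packed: np.ndarray, bit_width: int, count: int) -> np.ndarray:
    """Recover `count` values of the given bit width from a packed byte array."""
    bits = np.unpackbits(packed, bitorder="little")[: count * bit_width]
    bits = bits.reshape(count, bit_width).astype(np.uint32)
    return (bits << np.arange(bit_width)).sum(axis=1)

# Example: eight 3-bit weights (values 0-7) fit in 3 bytes instead of 8.
weights = np.array([5, 1, 7, 2, 0, 6, 3, 4], dtype=np.uint8)
packed = pack_bits(weights, 3)      # 8 values * 3 bits = 24 bits -> 3 bytes
restored = unpack_bits(packed, 3, len(weights))
assert np.array_equal(weights, restored)
```

A GPU kernel consuming such weights has to do the equivalent of `unpack_bits` in registers for every tile it loads, which is the kind of work the abstract's "fine-grained register management and optimized memory access patterns" refers to.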

Country of Origin
🇨🇦 🇺🇸 United States, Canada

Repos / Data Links
https://github.com/NVIDIA/tilus

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)