TimelyHLS: LLM-Based Timing-Aware and Architecture-Specific FPGA HLS Optimization
By: Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, and more
Potential Business Impact:
Automatically tunes FPGA chip designs to run faster and use less area.
Achieving timing closure and design-specific optimizations in FPGA-targeted High-Level Synthesis (HLS) remains a significant challenge due to the complex interaction between architectural constraints, resource utilization, and the absence of automated support for platform-specific pragmas. In this work, we propose TimelyHLS, a novel framework integrating Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) to automatically generate and iteratively refine HLS code optimized for FPGA-specific timing and performance requirements. TimelyHLS is driven by a structured architectural knowledge base containing FPGA-specific features, synthesis directives, and pragma templates. Given a kernel, TimelyHLS generates HLS code annotated with both timing-critical and design-specific pragmas. The synthesized RTL is then evaluated using commercial toolchains, and simulation correctness is verified against reference outputs via custom testbenches. TimelyHLS iteratively incorporates synthesis logs and performance reports into the LLM engine for refinement in the presence of functional discrepancies. Experimental results across 10 FPGA architectures and diverse benchmarks show that TimelyHLS reduces the need for manual tuning by up to 70%, while achieving up to 4x latency speedup (e.g., 3.85x for Matrix Multiplication, 3.7x for Bitonic Sort) and over 50% area savings in certain cases (e.g., 57% FF reduction in Viterbi). TimelyHLS consistently achieves timing closure and functional correctness across platforms, highlighting the effectiveness of LLM-driven, architecture-aware synthesis in automating FPGA design.
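To make the abstract's notion of "timing-critical and design-specific pragmas" concrete, below is a minimal sketch of the kind of pragma-annotated HLS kernel such a framework might emit. The matrix-multiplication kernel, the matrix size, and the specific pragma choices are illustrative assumptions written in Vitis/Vivado HLS C++ style; they are not code taken from the paper.

```cpp
// Illustrative sketch (not from the paper): a small matrix-multiplication
// kernel annotated with common timing-oriented HLS pragmas. Sizes and pragma
// choices are assumptions for demonstration only.
#define N 16

void matmul(const int A[N][N], const int B[N][N], int C[N][N]) {
    // Partition the arrays so a full row of A and column of B can be read
    // in one cycle, removing the memory bottleneck that limits pipelining.
#pragma HLS ARRAY_PARTITION variable=A complete dim=2
#pragma HLS ARRAY_PARTITION variable=B complete dim=1

row: for (int i = 0; i < N; i++) {
col:     for (int j = 0; j < N; j++) {
             // Pipeline the output loop so a new C[i][j] starts every cycle.
#pragma HLS PIPELINE II=1
             int acc = 0;
prod:        for (int k = 0; k < N; k++) {
                 // Fully unroll the reduction; with the partitioned arrays,
                 // all N multiply-accumulates can issue in parallel.
#pragma HLS UNROLL
                 acc += A[i][k] * B[k][j];
             }
             C[i][j] = acc;
         }
     }
}
```

In the workflow the abstract describes, pragmas like these would be selected per target FPGA, then revised iteratively based on synthesis logs, timing reports, and testbench results.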
Similar Papers
SAGE-HLS: Syntax-Aware AST-Guided LLM for High-Level Synthesis Code Generation
Programming Languages
Makes computer chip design faster and better.
CorrectHDL: Agentic HDL Design with LLMs Leveraging High-Level Synthesis as Reference
Artificial Intelligence
Fixes computer chip designs made by AI.
LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning
Machine Learning (CS)
Makes computer chips run much faster automatically.