Accelerating HDC-CNN Hybrid Models Using Custom Instructions on RISC-V GPUs
By: Wakuto Matsumi, Riaz-Ul-Haque Mian
Potential Business Impact:
Makes computers learn much faster and use less power.
Machine learning based on neural networks has advanced rapidly, but the high energy consumption required for training and inference remains a major challenge. Hyperdimensional Computing (HDC) offers a lightweight, brain-inspired alternative that enables high parallelism but often suffers from lower accuracy on complex visual tasks. To overcome this, hybrid accelerators combining HDC and Convolutional Neural Networks (CNNs) have been proposed, though their adoption is limited by poor generalizability and programmability. The rise of open-source RISC-V architectures has created new opportunities for domain-specific GPU design. Unlike traditional proprietary GPUs, emerging RISC-V-based GPUs provide flexible, programmable platforms suitable for custom computation models such as HDC. In this study, we design and implement custom GPU instructions optimized for HDC operations, enabling efficient processing for hybrid HDC-CNN workloads. Experimental results using four types of custom HDC instructions show a performance improvement of up to 56.2 times in microbenchmark tests, demonstrating the potential of RISC-V GPUs for energy-efficient, high-performance computing.
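The abstract does not specify which HDC operations the four custom instructions implement, but binary HDC workloads like those in hybrid HDC-CNN pipelines are built from a small set of standard primitives: binding (elementwise XOR), bundling (bitwise majority vote), and similarity (normalized Hamming distance). The sketch below is purely illustrative, using Python integers as bit vectors; all names and the dimensionality `D` are assumptions for exposition, not details from the paper. These wide bitwise loops are exactly the kind of kernel a custom GPU instruction can collapse into a few operations.

```python
import random

D = 1024  # hypervector width in bits (illustrative; real systems often use ~10k)

def random_hv():
    # A random binary hypervector, encoded as a Python int with D bits.
    return random.getrandbits(D)

def bind(a, b):
    # Binding associates two hypervectors; for binary HVs this is XOR.
    return a ^ b

def bundle(hvs):
    # Bundling superposes several hypervectors via a bitwise majority vote.
    out = 0
    for bit in range(D):
        ones = sum((hv >> bit) & 1 for hv in hvs)
        if ones * 2 > len(hvs):
            out |= 1 << bit
    return out

def similarity(a, b):
    # Normalized Hamming similarity: 1.0 for identical vectors,
    # about 0.5 for an unrelated random pair.
    return 1.0 - bin(a ^ b).count("1") / D

random.seed(0)
x, y, z = random_hv(), random_hv(), random_hv()

# Binding is self-inverse: XOR-ing with y again recovers x exactly.
assert similarity(bind(bind(x, y), y), x) == 1.0

# A bundle remains similar to each of its members...
m = bundle([x, y, z])
assert similarity(m, x) > 0.6

# ...while unrelated hypervectors sit near 0.5 similarity.
assert 0.4 < similarity(x, z) < 0.6
```

Note how `bundle` and `similarity` iterate over every bit position in software; a fused population-count or majority instruction operating on wide registers is one plausible route to the kind of microbenchmark speedups the abstract reports.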
Similar Papers
ScalableHD: Scalable and High-Throughput Hyperdimensional Computing Inference on Multi-Core CPUs
Distributed, Parallel, and Cluster Computing
Makes computers understand things faster on normal chips.
Flexible Vector Integration in Embedded RISC-V SoCs for End to End CNN Inference Acceleration
Distributed, Parallel, and Cluster Computing
Makes smart devices run AI faster and use less power.
MARVEL: An End-to-End Framework for Generating Model-Class Aware Custom RISC-V Extensions for Lightweight AI
Hardware Architecture
Makes smart devices run AI faster, using less power.