DecoHD: Decomposed Hyperdimensional Classification under Extreme Memory Budgets
By: Sanggeon Yun, Hyunwoo Oh, Ryozo Masukawa, and more
Potential Business Impact:
Makes smart computer brains much smaller and faster.
Decomposition is a proven way to shrink deep networks without changing I/O. We bring this idea to hyperdimensional computing (HDC), where footprint cuts usually shrink the feature axis and erode concentration and robustness. Prior HDC decompositions decode via fixed atomic hypervectors, which are ill-suited for compressing learned class prototypes. We introduce DecoHD, which learns directly in a decomposed HDC parameterization: a small, shared set of per-layer channels with multiplicative binding across layers and bundling at the end, yielding a large representational space from compact factors. DecoHD compresses along the class axis via a lightweight bundling head while preserving native bind-bundle-score; training is end-to-end, and inference remains pure HDC, aligning with in/near-memory accelerators.

In evaluation, DecoHD attains extreme memory savings with only minor accuracy degradation under tight deployment budgets. On average it stays within about 0.1-0.15% of a strong non-reduced HDC baseline (worst case 5.7%), is more robust to random bit-flip noise, reaches its accuracy plateau with up to ~97% fewer trainable parameters, and, in hardware, delivers roughly 277x/35x energy/speed gains over a CPU (AMD Ryzen 9 9950X), 13.5x/3.7x over a GPU (NVIDIA RTX 4090), and 2.0x/2.4x over a baseline HDC ASIC.
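The decomposed parameterization described above (small shared per-layer channels, multiplicative binding across layers, bundling into class prototypes, pure bind-bundle-score inference) can be made concrete with a toy sketch. The code below is not the paper's implementation: bipolar hypervectors, element-wise multiplication as the binding operator, a dense bundling head, and dot-product scoring are assumptions made for illustration, and the names (channels, bundle_w, compose_all) are hypothetical.

```python
# Minimal sketch of a decomposed bind-bundle-score classifier.
# Assumptions (not from the paper text): bipolar hypervectors, element-wise
# multiplication as binding, a dense bundling head, dot-product scoring.
import numpy as np

rng = np.random.default_rng(0)

D = 1024   # hypervector dimensionality
L = 3      # number of decomposition layers
K = 4      # shared channels per layer (small, shared factor set)
C = 100    # number of classes

# Compact factors: L x K x D values instead of C x D flat class prototypes.
channels = rng.choice([-1.0, 1.0], size=(L, K, D))

def compose_all(channels):
    """Multiplicative binding across layers: every combination of one channel
    per layer yields one composed hypervector, so K**L composites are spanned
    by only L*K*D stored values."""
    composites = channels[0]                                   # (K, D)
    for layer in channels[1:]:
        composites = (composites[:, None, :] * layer[None, :, :]).reshape(-1, D)
    return composites                                          # (K**L, D)

# Lightweight bundling head: weighted bundling of composites into C class
# prototypes; this is the learned part that compresses along the class axis.
bundle_w = rng.normal(size=(C, K ** L))

def class_scores(x, channels, bundle_w):
    """Bind-bundle-score inference: bind the factors, bundle them into class
    prototypes, then score a query hypervector by dot-product similarity."""
    composites = compose_all(channels)                         # (K**L, D)
    prototypes = np.sign(bundle_w @ composites)                # (C, D)
    return prototypes @ x                                      # (C,)

# Example query hypervector and prediction.
x = rng.choice([-1.0, 1.0], size=D)
print("predicted class:", int(np.argmax(class_scores(x, channels, bundle_w))))
```

In this sketch the stored parameters are the factors plus the bundling head, L*K*D + C*K**L ≈ 18.7k values, versus C*D = 102.4k for flat class prototypes; that gap is the class-axis compression the abstract describes, and it widens as the number of classes grows.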
Similar Papers
LogHD: Robust Compression of Hyperdimensional Classifiers via Logarithmic Class-Axis Reduction
Machine Learning (CS)
Makes computers remember more with less energy.
DPQ-HD: Post-Training Compression for Ultra-Low Power Hyperdimensional Computing
Machine Learning (CS)
Makes smart devices work faster with less power.
D-com: Accelerating Iterative Processing to Enable Low-rank Decomposition of Activations
Hardware Architecture
Makes big computer brains run much faster.