Fast and Compact Tsetlin Machine Inference on CPUs Using Instruction-Level Optimization
By: Yefan Zeng, Shengyu Duan, Rishad Shafik and more
Potential Business Impact:
Makes AI run much faster on everyday computer chips.
The Tsetlin Machine (TM) offers high-speed inference on resource-constrained devices such as CPUs. Its logic-driven operations naturally lend themselves to parallel execution on modern CPU architectures. Motivated by this, we propose an efficient software implementation of the TM that leverages instruction-level bitwise operations for compact model representation and accelerated processing. To further improve inference speed, we introduce an early-exit mechanism that exploits the TM's AND-based clause evaluation to avoid unnecessary computation. Building on this, we propose a literal reorder strategy designed to maximize the likelihood of early exits. This strategy is applied in a post-training, pre-inference stage through statistical analysis of all literals and the corresponding actions of their associated Tsetlin Automata (TA), introducing negligible runtime overhead. Experimental results using the gem5 simulator with an ARM processor show that our optimized implementation reduces inference time by up to 96.71% compared to conventional integer-based TM implementations while maintaining comparable code density.
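To make the core idea concrete, here is a minimal C sketch (hypothetical names and layout, not the authors' code) of bitwise clause evaluation with an early exit. It assumes literals (inputs and their negations) are packed 64 per machine word, and that each clause stores an include-mask derived from its TA actions; because a TM clause is an AND over its included literals, the first word in which an included literal is 0 falsifies the clause and the remaining words need never be read.

```c
#include <stdint.h>
#include <stddef.h>

/* One clause of a bitwise-packed Tsetlin Machine.
 * Bit i of include_mask is 1 when the clause's TA includes literal i. */
typedef struct {
    const uint64_t *include_mask;  /* included-literal bits, packed 64 per word */
    size_t num_words;              /* number of 64-bit literal words */
} Clause;

/* Evaluate a clause as an AND over its included literals.
 * Early exit: any word containing an included literal that is
 * currently 0 falsifies the whole clause immediately. */
static int clause_output(const Clause *c, const uint64_t *literals)
{
    for (size_t w = 0; w < c->num_words; ++w) {
        /* Nonzero means some included literal is 0 in this word. */
        if (c->include_mask[w] & ~literals[w])
            return 0;              /* early exit */
    }
    return 1;                      /* all included literals are 1 */
}
```

Under this reading, the literal reorder strategy would permute the literal columns once after training, using statistics on how often each included literal is 0, so that the literals most likely to falsify clauses occupy the earliest words and the early exit fires as soon as possible on average.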
Similar Papers
A Tsetlin Machine Image Classification Accelerator on a Flexible Substrate
Systems and Control
Makes smart chips bendable for health gadgets.
Event-Driven Digital-Time-Domain Inference Architectures for Tsetlin Machines
Machine Learning (CS)
Makes computer learning faster while using less power.
Scalable Bayesian Network Structure Learning Using Tsetlin Machine to Constrain the Search Space
Machine Learning (CS)
Finds cause-and-effect links faster for big problems.