TinyML for Speech Recognition
By: Andrew Barovic, Armin Moin
Potential Business Impact:
Lets small devices understand many spoken words.
We train and deploy a quantized 1D convolutional neural network model to conduct speech recognition on a highly resource-constrained IoT edge device. This can be useful in various Internet of Things (IoT) applications, such as smart homes and ambient assisted living for the elderly and people with disabilities, to name a few examples. In this paper, we first create a new dataset with over one hour of audio data that enables our research and will be useful to future studies in this field. Second, we utilize the technologies provided by Edge Impulse to enhance our model's performance and achieve an accuracy of up to 97% on our dataset. For validation, we implement our prototype on the Arduino Nano 33 BLE Sense microcontroller board. This board is designed specifically for IoT and AI applications, making it an ideal choice for our target use case scenarios. While most existing research focuses on a limited set of keywords, our model can process 23 different keywords, enabling more complex commands.
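The paper does not include code, but a minimal sketch of the pipeline it describes, a small 1D CNN keyword classifier followed by post-training int8 quantization for microcontroller deployment, might look like the following. The layer sizes, the assumed MFCC feature shape (NUM_FRAMES, NUM_COEFFS), and the use of the TensorFlow Lite converter are illustrative assumptions; the actual model and quantization in the paper were produced with Edge Impulse.

```python
import tensorflow as tf

NUM_CLASSES = 23   # keyword classes, as reported in the paper
NUM_FRAMES = 49    # assumed: feature frames per audio window
NUM_COEFFS = 13    # assumed: MFCC coefficients per frame

# Small 1D convolutional classifier over per-frame audio features.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_COEFFS)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would be called here with the labeled keyword dataset.

# Post-training int8 quantization so the model fits on the microcontroller.
def representative_data():
    # Placeholder: in practice, yield real feature windows from the training set.
    for _ in range(100):
        yield [tf.random.uniform((1, NUM_FRAMES, NUM_COEFFS))]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```

Under these assumptions, the resulting tflite_model byte buffer is what would be embedded in the Arduino firmware (for example, as a C array) and run with a TensorFlow Lite Micro style interpreter on the Nano 33 BLE Sense.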
Similar Papers
Wireless Hearables With Programmable Speech AI Accelerators
Sound
Makes earbuds understand speech better, anywhere.
TF-MLPNet: Tiny Real-Time Neural Speech Separation
Sound
Clears background noise so you hear speech better.
On-Sensor Convolutional Neural Networks with Early-Exits
Machine Learning (CS)
Makes smart sensors use less power.