The field of Tiny Machine Learning (TinyML) has gained significant attention due to its potential to enable intelligent applications on resource-constrained devices. This review provides an in-depth analysis of the advancements in efficient neural …
We propose a novel regularization method for hybrid quantization of neural networks, enabling efficient deployment on ultra-low-power microcontrollers in embedded systems. Our approach introduces alternative regularization functions and a uniform …
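The abstract stops short of the regularizer's exact form. As a rough illustration of the general idea, the sketch below adds a penalty that pulls each weight toward the nearest point of a uniform quantization grid; PyTorch, the grid spacing, and the penalty strength are all assumptions here, not the paper's actual functions or parameterization.

```python
import torch

def quantization_regularizer(weights, step=2.0 / 255, strength=1e-4):
    """Penalize each weight's distance to the nearest point on a
    uniform grid with spacing `step`. Added to the task loss, this
    nudges weights toward quantized values during training.

    Illustrative sketch only; the paper's regularization functions
    may differ.
    """
    # Fractional offset from the nearest grid point, in units of the
    # grid spacing (lies in [-0.5, 0.5]).
    offset = weights / step - torch.round(weights / step)
    return strength * (offset ** 2).sum()

# Usage inside a training step (model, loss_fn, x, y assumed):
# loss = loss_fn(model(x), y)
# for p in model.parameters():
#     loss = loss + quantization_regularizer(p)
# loss.backward()
```

Because `torch.round` has zero gradient almost everywhere, the penalty's gradient simply points from each weight toward its nearest grid point, which is what drives weights onto quantized values as training proceeds.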
TinyML applications such as speech recognition, motion detection, and anomaly detection are attracting many industries and researchers thanks to their innovative potential and cost-effectiveness. Since TinyMLOps is at an even earlier stage than MLOps, the …
We focus on a promising 1-bit weight quantization approach for neural networks that optimizes the model and the weights simultaneously during training. It has low training overhead and is hassle-free, scalable, and automatable. We show that this method generalizes to n-bit quantization, provided sufficient int-n support is available on the edge device. The algorithm is model-agnostic and can be integrated into any training phase of the TinyMLOps workflow, for any application.
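For concreteness, the sketch below shows the standard straight-through-estimator recipe for training 1-bit weights jointly with the model, assuming PyTorch; the layer structure and per-tensor scale `alpha` are common choices used here for illustration, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLinear(nn.Module):
    """Linear layer with 1-bit weights in the forward pass.

    Full-precision 'latent' weights are kept and updated by the
    optimizer; the forward pass uses their sign (scaled), and the
    straight-through estimator routes gradients back to the latent
    weights. Illustrative sketch, not the paper's exact method.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        # Per-tensor scale matching the latent weights' mean
        # magnitude (a common choice; an assumption here).
        alpha = self.weight.abs().mean()
        w_bin = alpha * torch.sign(self.weight)
        # Straight-through estimator: forward computes with w_bin,
        # backward treats the binarization as the identity, so
        # gradients update the full-precision latent weights.
        w = self.weight + (w_bin - self.weight).detach()
        return F.linear(x, w)

# Usage: drop-in replacement for nn.Linear, trained as usual.
# layer = BinaryLinear(64, 10)
# out = layer(torch.randn(8, 64))
```

The `detach()` trick is what lets the model and the 1-bit weights be optimized at the same time: the deployed network only ever needs the signs and one scale per tensor, while training still benefits from full-precision gradient accumulation.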