We propose a novel regularization method for hybrid quantization of neural networks, enabling efficient deployment on ultra-low-power microcontrollers in embedded systems. Our approach introduces alternative regularization functions and a uniform hybrid quantization scheme targeting {2, 4, 8}-bit precision. The method offers flexibility down to the weight-matrix level, incurs negligible computational overhead, and integrates seamlessly into existing 8-bit post-training quantization pipelines. Additionally, we propose novel schedule functions for regularization, addressing the critical yet often overlooked timing aspect and providing new insights into pacing quantization. Our method nearly halves model byte size with less than 1% accuracy loss, substantially reducing the power and memory footprint on microcontrollers. These contributions advance resource-efficient models on constrained devices and the emerging field of tinyML, overcoming limitations of existing approaches and offering new perspectives on the quantization process. The practical implications of our work span diverse real-world applications, including IoT, wearables, and autonomous systems.
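To make the general idea concrete, the following is a minimal sketch, not the authors' implementation, of how a quantization-inducing regularizer paired with a schedule function might be wired into a PyTorch training loop. The squared-distance-to-grid penalty, the linear schedule, the per-layer bit assignment `bit_assignment`, and the functions `uniform_grid_penalty`, `linear_schedule`, and `regularized_loss` are all illustrative assumptions, not details taken from the paper.

```python
import torch

def uniform_grid_penalty(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Mean squared distance of each weight to its nearest point on a
    uniform b-bit grid over [-max|w|, max|w|] (illustrative assumption)."""
    levels = 2 ** bits
    scale = w.detach().abs().max().clamp(min=1e-8)
    step = 2 * scale / (levels - 1)
    # Detach the nearest grid point so gradients only pull w toward the grid.
    nearest = torch.round(w / step).detach() * step
    return ((w - nearest) ** 2).mean()

def linear_schedule(step: int, total_steps: int, lam_max: float = 1e-3) -> float:
    """One possible schedule: ramp the regularization strength linearly
    from 0 to lam_max over training (hypothetical pacing choice)."""
    return lam_max * min(step / max(total_steps, 1), 1.0)

# Hypothetical per-layer bit widths for a hybrid {2, 4, 8}-bit scheme.
bit_assignment = {"fc1.weight": 8, "fc2.weight": 4, "fc3.weight": 2}

def regularized_loss(model, task_loss, step, total_steps):
    """Task loss plus the scheduled quantization penalty over all
    layers that have an assigned bit width."""
    reg = torch.zeros((), device=task_loss.device)
    for name, param in model.named_parameters():
        if name in bit_assignment:
            reg = reg + uniform_grid_penalty(param, bit_assignment[name])
    return task_loss + linear_schedule(step, total_steps) * reg
```

In a sketch like this, the schedule controls how quickly weights are pressed toward their quantization grids: a slow ramp lets the network first fit the task, then gradually anneal toward quantization-friendly values before post-training quantization is applied.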