Exploring the Relative Value of Collaborative Optimisation Pathways (Student Abstract)

Published: 2023 · Last Modified: 10 Nov 2024 · AAAI 2023 · CC BY-SA 4.0
Abstract: Compression techniques in machine learning (ML) independently improve a model's inference efficiency by reducing its memory footprint while aiming to maintain its quality. This paper lays the groundwork for questioning the merit of a compression pipeline that applies every technique, as opposed to skipping some, through a case study on a keyword-spotting model, DS-CNN-S. In addition, it documents improvements to the model's training and dataset infrastructure. For this model, preliminary findings suggest that a full-scale pipeline is not required to achieve a competitive memory footprint and accuracy, but a more comprehensive study is required.