Communicating Classification Results Over Noisy Channels

Published: 01 Jan 2024, Last Modified: 20 May 2025 · ICC 2024 · CC BY-SA 4.0
Abstract: In this work, the problem of communicating the decisions of a classifier over a noisy channel is considered. With machine learning based models being used in a variety of time-sensitive applications, transmitting these decisions in a reliable and timely manner is of significant importance. To this end, we study the scenario where a probability vector (representing the decisions of a classifier) at the transmitter needs to be transmitted over a noisy channel. Under the assumption that the distortion between the original probability vector and the one reconstructed at the receiver is measured via f-divergence, we study the trade-off between transmission latency and distortion. We completely analyze this trade-off for the setting where uniform quantization is used to encode the probability vector, and the incurred latency is obtained via results on finite-blocklength channel capacity. Our results show an interesting interplay between the source distortion (i.e., the distortion of the probability vector measured via f-divergence) and the subsequent channel encoding/decoding parameters, and indicate that a joint design of these parameters is crucial for navigating the latency-distortion trade-off.
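As a rough, self-contained illustration of the setup described above (not the authors' exact scheme), the sketch below uniformly quantizes a classifier's probability vector, measures the resulting source distortion with the KL divergence (one choice of f-divergence), and estimates the blocklength needed to deliver the quantized bits over a binary symmetric channel using the finite-blocklength normal approximation. The BSC channel model, the naive fixed-rate bit count, the renormalization step, and all parameter values are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm

def uniform_quantize(p, bits=4):
    """Uniformly quantize each entry of a probability vector to 2**bits levels,
    then renormalize (the renormalization step is an assumption for this sketch)."""
    levels = 2 ** bits
    q = np.round(p * (levels - 1)) / (levels - 1)
    q = np.clip(q, 1e-12, None)   # avoid exact zeros before renormalizing
    return q / q.sum()

def kl_divergence(p, q):
    """KL divergence D(p || q), one example of an f-divergence."""
    p, q = np.clip(p, 1e-12, 1.0), np.clip(q, 1e-12, 1.0)
    return float(np.sum(p * np.log(p / q)))

def bsc_blocklength(k_bits, delta=0.05, eps=1e-3, n_max=100000):
    """Smallest blocklength n supporting k_bits information bits over a BSC(delta)
    with block error probability eps, via the normal approximation
    k ~ n*C - sqrt(n*V)*Q^{-1}(eps). The BSC model is an assumption; the paper
    only posits a generic noisy channel."""
    h = -delta * np.log2(delta) - (1 - delta) * np.log2(1 - delta)
    C = 1.0 - h                                                    # capacity (bits/use)
    V = delta * (1 - delta) * (np.log2((1 - delta) / delta)) ** 2  # channel dispersion
    qinv = norm.isf(eps)                                           # Q^{-1}(eps)
    for n in range(1, n_max + 1):
        if n * C - np.sqrt(n * V) * qinv >= k_bits:
            return n
    return None

# Example: quantize a 5-class classifier output, then estimate the latency
# (channel uses) needed to deliver the quantized vector.
p = np.array([0.70, 0.15, 0.08, 0.05, 0.02])
for bits in (2, 4, 6):
    p_hat = uniform_quantize(p, bits=bits)
    k = bits * len(p)   # bits to transmit under a naive fixed-rate encoding
    n = bsc_blocklength(k)
    print(f"{bits} bits/entry: KL distortion {kl_divergence(p, p_hat):.3e}, "
          f"blocklength ~{n} channel uses")
```

Sweeping the per-entry resolution in this way exposes the qualitative trade-off the abstract refers to: finer quantization lowers the f-divergence distortion but increases the number of bits, and hence the blocklength (latency) required at a fixed reliability target.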