Keywords: Data Compression, Distributed Source Coding, Multi-sensor Networks, Bandwidth Allocation, Information Theory
TL;DR: We design a distributed compression framework that learns low-rank task representations and efficiently allocates bandwidth among sensors, providing a graceful trade-off between task performance and bandwidth.
Abstract: Efficient compression of correlated data is essential to minimize communication overhead in multi-sensor networks.
Due to limited bandwidth, each sensor independently compresses its data and transmits it to a central node.
A decoder at the central node decompresses the data and passes it to a pre-trained machine learning model that performs the downstream task and produces the final output. It is therefore important to compress only the features that are relevant to the task.
Moreover, the final performance depends heavily on the total available bandwidth. In practice, the available bandwidth often varies, and higher bandwidth results in better task performance.
We design a novel distributed compression framework composed of independent encoders and a joint decoder, which we call neural distributed principal component analysis (NDPCA).
NDPCA flexibly compresses data from multiple sources to any available bandwidth with a single model, reducing computing and storage overhead.
NDPCA achieves this by learning low-rank task representations and efficiently distributing bandwidth among sensors, thus providing a graceful trade-off between performance and bandwidth.
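The two ingredients above, low-rank task representations and bandwidth allocation across sensors, can be illustrated with a purely linear sketch based on classical PCA. This is our own simplified analogue for intuition, not the authors' neural implementation: the function names and the greedy allocation rule are assumptions.

```python
# Illustrative sketch only: a linear (PCA-based) analogue of distributed
# compression with bandwidth allocation. Not the NDPCA model itself.
import numpy as np

def pca_encoder(X, k):
    """Compress rows of X to k principal components.

    Returns the codes, the decoder basis, the data mean, and the
    full singular-value spectrum (used for bandwidth allocation)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                      # top-k principal directions
    return Xc @ basis.T, basis, mean, S

def allocate_bandwidth(spectra, total_k):
    """Greedy allocation: give each extra component to the sensor whose
    next unused singular value is largest (largest marginal energy)."""
    ks = [0] * len(spectra)
    for _ in range(total_k):
        gains = [S[k] if k < len(S) else 0.0 for S, k in zip(spectra, ks)]
        ks[int(np.argmax(gains))] += 1
    return ks

rng = np.random.default_rng(0)
# Two "sensors": one with full-rank data, one nearly rank-2.
X1 = rng.normal(size=(100, 8))
X2 = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))

spectra = [pca_encoder(X, k=8)[3] for X in (X1, X2)]
ks = allocate_bandwidth(spectra, total_k=6)
print(ks)  # the low-rank sensor receives at most 2 components
```

The greedy rule captures the key idea: under a shared bandwidth budget, sensors whose data (or task-relevant features) have more energy in additional directions receive more of the budget, rather than splitting it uniformly.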
Experiments show that NDPCA improves the accuracy of object detection tasks on satellite imagery by 14% compared to an autoencoder with uniform bandwidth allocation.
Submission Number: 25