Keywords: Efficiency, Sensors, Layout Design
Abstract: Conventional camera sensors capture images on a uniform pixel grid, producing redundant data and incurring high memory usage and costly transmission regardless of the downstream task. We present BaSeOpt, a task-aware sensing framework that jointly optimizes sensor layouts and vision models for applications such as semantic segmentation. Instead of uniformly sampling every pixel, BaSeOpt allocates higher resolution to task-critical regions while sparsely sampling less informative areas, reducing acquisition overhead. To search the vast space of possible layouts, we formulate the problem as Bayesian Optimization in the latent space of a Variational Autoencoder trained on candidate layouts, enabling efficient discovery of promising configurations. Experiments demonstrate that BaSeOpt automatically identifies sensor layouts that accelerate data acquisition at the camera level, highlighting the benefits of co-optimizing sensing and inference for efficient vision systems.
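To make the latent-space search concrete, the sketch below illustrates Bayesian Optimization over a VAE latent space in the general style the abstract describes. It is not the authors' implementation: the VAE decoder is replaced by a stub (decode_layout), the segmentation-quality objective by a toy function (task_score), and the acquisition is standard expected improvement over a scikit-learn Gaussian-process surrogate, optimized by random search over the latent prior. All names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch: Bayesian Optimization in a VAE latent space.
# The decoder and objective are placeholders standing in for the real
# layout decoder and the task-aware score (e.g. segmentation accuracy
# traded off against acquisition cost).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

LATENT_DIM = 8  # assumed latent dimensionality of the layout VAE


def decode_layout(z):
    """Placeholder for the pretrained VAE decoder: latent code -> layout."""
    return np.tanh(z)  # a real decoder would emit a pixel-density map


def task_score(layout):
    """Placeholder objective standing in for the true task score."""
    return -np.sum((layout - 0.5) ** 2)


def expected_improvement(z_cand, gp, best_y, xi=0.01):
    """Expected-improvement acquisition over the GP surrogate."""
    mu, sigma = gp.predict(z_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # guard against zero predictive std
    imp = mu - best_y - xi
    u = imp / sigma
    return imp * norm.cdf(u) + sigma * norm.pdf(u)


rng = np.random.default_rng(0)

# Initial design: evaluate a handful of random latent codes.
Z = rng.normal(size=(5, LATENT_DIM))
y = np.array([task_score(decode_layout(z)) for z in Z])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for step in range(20):
    gp.fit(Z, y)
    # Maximize the acquisition by random search over the latent prior;
    # gradient-based or trust-region optimizers are common alternatives.
    candidates = rng.normal(size=(2048, LATENT_DIM))
    ei = expected_improvement(candidates, gp, y.max())
    z_next = candidates[np.argmax(ei)]
    y_next = task_score(decode_layout(z_next))
    Z = np.vstack([Z, z_next])
    y = np.append(y, y_next)

print("best score:", y.max())
print("best latent code:", np.round(Z[np.argmax(y)], 3))
```

In a full pipeline, task_score would train or evaluate the downstream vision model on data acquired with the decoded layout, which is why a sample-efficient surrogate such as a GP is attractive for this search.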
Submission Number: 11