Comparison of Predictive-Corrective Video Coding Filters for Real-Time FPGA-based Lossless Compression in Multi-Camera Systems

Published: 01 Jan 2015 · Last Modified: 18 Nov 2024 · FPGAworld 2015 · CC BY-SA 4.0
Abstract: Combining multiple cameras into a larger multi-camera system gives the opportunity to realize novel concepts (e.g. omnidirectional video, view interpolation) in real time. The better the quality, the more data needs to be captured. As more data has a direct impact on storage space and communication bandwidth, it is preferable to reduce the load by compressing the data. This cannot come at the expense of latency, because the main requirement for multi-camera video applications is real-time data processing. In addition, all image details need to be preserved for computational use in a later stage. Therefore, this research focuses on predictive-corrective coding filters with entropy encoding (i.e. Huffman coding), applied to the raw image sensor data to compress the huge amount of data in a lossless manner. This technique needs no framebuffers and introduces no additional latency; at most there is some line-based latency, in order to combine multiple compressed pixels into one communication package. It has a lower compression factor than lossy image compression algorithms, but it does not remove imperceptible image features that are crucial in disparity calculations, matching, video stitching and 3D model synthesis. This paper compares various existing predictive-corrective coding filters after they have been optimized to work on raw sensor data with a color filter array (i.e. Bayer pattern). The intention is to develop an efficient implementation for System-on-Chip (SoC) architectures to improve computational multi-camera systems.
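To illustrate the general shape of such a predictive-corrective step (the specific filters compared in the paper are not reproduced here), the following C sketch assumes a simple same-colour left-neighbour predictor on one Bayer-patterned line: each pixel is predicted from the pixel two positions to its left (the previous sample of the same colour channel), and the signed residual is remapped to a non-negative symbol for the entropy encoder. The function names (predict_line, zigzag) and the choice of predictor are illustrative assumptions, not the authors' implementation.

#include <stdint.h>

/* Illustrative sketch (not the paper's filter): predict each pixel from the
 * previous pixel of the same colour channel on the same line, i.e. two
 * positions back in a Bayer-patterned row, and emit the prediction residual.
 * Only the current line is needed, so no framebuffer is required. */
void predict_line(const uint16_t *line, int width, int16_t *residual)
{
    for (int x = 0; x < width; x++) {
        /* Same-colour left neighbour; the first two pixels of a line have
         * no reference and are passed through unpredicted. */
        uint16_t pred = (x >= 2) ? line[x - 2] : 0;
        residual[x] = (int16_t)(line[x] - pred);
    }
}

/* Map signed residuals 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ... so that
 * small magnitudes receive short codewords from a static Huffman table. */
static inline uint16_t zigzag(int16_t r)
{
    return (r >= 0) ? (uint16_t)(2 * r) : (uint16_t)(-2 * r - 1);
}

Because the predictor only looks backwards within the current line, such a filter maps naturally onto a streaming FPGA pipeline: residuals can be entropy-coded pixel by pixel and packed into communication packets per line, which is the source of the line-based latency mentioned above.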