Multi-view coding and routing of local features in Visual Sensor Networks

Abstract
Visual Sensor Networks (VSNs) have recently been used to implement automatic visual analysis tasks in which local image features, rather than images, are compressed and transmitted to a central controller. Such features may also be compressed in a multi-view fashion, exploiting the redundancy between overlapping views. In this paper we study the problem of multi-view coding and routing of features in VSNs. We empirically analyze the relationship between the bitrate reduction obtained with a practical multi-view local feature encoder and several geometry-based, image-based and feature-based predictors, with the aim of identifying the most accurate, yet compact, predictor of the achievable compression efficiency when jointly encoding correlated streams of local features. We then propose a robust optimization framework that exploits these predictors. The resulting optimization problem maximizes the amount of data extracted from the VSN by properly routing the streams of features, subject to capacity, interference and energy constraints, while explicitly accounting for the uncertainty in the estimated compression efficiency. Extensive experiments on simulated VSNs show that multi-view coding maximizes the amount of data extracted from camera nodes, and that the robust optimization approach yields significant improvements in uncertain scenarios compared to the optimal solution of its deterministic counterpart.
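The abstract does not spell out the optimization model; purely as an illustrative sketch, a robust flow-based formulation of the kind described might look as follows. All symbols (extracted data x_i, link flows f_{ij}, link capacities c_{ij}, coding-gain factors gamma_i, conflict sets C_k, energy coefficients, and the uncertainty set Gamma) are assumptions made for illustration and are not the paper's notation.

% Illustrative robust routing/utility formulation (assumed notation, not the paper's exact model)
\begin{align*}
\max_{x \ge 0,\; f \ge 0} \quad & \sum_{i \in \mathcal{N}} x_i
  && \text{(total data extracted from camera nodes)} \\
\text{s.t.} \quad
  & \sum_{j} f_{ij} - \sum_{j} f_{ji} = \gamma_i \, x_i,
  && \forall i \in \mathcal{N} \ \text{(flow conservation; } \gamma_i \text{ models the multi-view coding gain)} \\
  & \sum_{(i,j) \in \mathcal{C}_k} \frac{f_{ij}}{c_{ij}} \le 1,
  && \forall k \ \text{(capacity with interference, via conflict sets } \mathcal{C}_k \text{)} \\
  & E^{\mathrm{tx}}_i \sum_{j} f_{ij} + E^{\mathrm{rx}}_i \sum_{j} f_{ji} \le E^{\mathrm{budget}}_i,
  && \forall i \in \mathcal{N} \ \text{(per-node energy budget)} \\
  & \text{constraints hold for all } \gamma \in \Gamma
  && \text{(robustness to compression-efficiency uncertainty)}
\end{align*}

Under such a formulation, the deterministic counterpart mentioned in the abstract would fix gamma to its point estimate, whereas the robust version requires feasibility over the whole uncertainty set Gamma.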
