A Cross-Layer Design for Perceptual Optimization of H.264/SVC with Unequal Error Protection

Abstract
Delivering high perceptual quality video over wireless channels is challenging due to time-varying channel quality and packet-to-packet variation in how much each source packet matters to the end user's perceptual experience. Leveraging perceptual metrics in concert with link adaptation to maximize perceptual quality while satisfying real-time delay constraints remains largely unexplored. We introduce an APP/MAC/PHY cross-layer architecture for optimizing the perceptual quality of delay-constrained scalable video transmission. We propose an online QoS-to-QoE mapping technique that quantifies the loss visibility of packets from each video layer using the ACK history and perceptual metrics. At the PHY layer, we develop a link adaptation technique that uses this QoS-to-QoE mapping to provide perceptually optimized unequal error protection, protecting each layer according to its packet loss visibility. At the APP layer, the source rate is adapted by selecting the set of temporal and quality layers to transmit based on the channel statistics, source rates, and playback buffer state. The proposed cross-layer optimization framework allows the channel to adapt on a faster time scale than the video codec, and it provides a tradeoff between playback buffer occupancy and perceptual quality. We show that the proposed architecture prevents playback buffer starvation, provides immunity against short-term channel fluctuations, regulates the buffer size, and achieves a 30% increase in video capacity over throughput-optimal link adaptation.