Space-time video completion
- 12 November 2004
- proceedings article
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 1, pp. 120-127
- https://doi.org/10.1109/cvpr.2004.1315022
Abstract
We present a method for space-time completion of large space-time "holes" in video sequences of complex dynamic scenes. The missing portions are filled in by sampling spatio-temporal patches from the available parts of the video, while enforcing global spatio-temporal consistency between all patches in and around the hole. This is obtained by posing the task of video completion and synthesis as a global optimization problem with a well-defined objective function. The consistent completion of static scene parts simultaneously with dynamic behaviors leads to realistic-looking video sequences.

Space-time video completion is useful for a variety of tasks, including but not limited to: (i) sophisticated video removal (of undesired static or dynamic objects) by completing the appropriate static or dynamic background information; (ii) correction of missing/corrupted video frames in old movies; and (iii) synthesis of new video frames to add a visual story, modify it, or generate a new one. Some examples of these are shown in the paper.

We follow the spirit of [10] and use non-parametric sampling, while extending it to handle static and dynamic information simultaneously. Global consistency is obtained by posing the problem of video completion/synthesis as a global optimization problem with a well-defined objective function and solving it appropriately. The objective function states that the resulting completion should satisfy the two following constraints: (i) every local space-time patch of the video sequence should […]
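The completion procedure described above — sample space-time patches from the known parts of the video and enforce consistency between overlapping patches — can be sketched in toy form. The code below is an illustrative reconstruction, not the authors' implementation: it fills a small hole in a tiny video volume by repeatedly matching every hole-covering patch to its nearest fully-known patch (by sum of squared differences) and averaging the overlapping votes. The patch size, iteration count, and helper names (`st_patches`, `complete`) are all assumptions made for this sketch.

```python
import numpy as np

def st_patches(vol, p):
    """Enumerate all p x p x p space-time patches of a (T, H, W) volume."""
    T, H, W = vol.shape
    pos, rows = [], []
    for t in range(T - p + 1):
        for y in range(H - p + 1):
            for x in range(W - p + 1):
                pos.append((t, y, x))
                rows.append(vol[t:t + p, y:y + p, x:x + p].ravel())
    return pos, np.array(rows)

def complete(video, mask, p=3, iters=3):
    """Fill masked (missing) voxels by nearest-neighbour patch voting."""
    vol = video.astype(float).copy()
    vol[mask] = vol[~mask].mean()                 # crude initial guess
    for _ in range(iters):
        pos, rows = st_patches(vol, p)
        # Database: patches drawn entirely from known (unmasked) voxels.
        db_rows = np.array([r for (t, y, x), r in zip(pos, rows)
                            if not mask[t:t + p, y:y + p, x:x + p].any()])
        acc = np.zeros_like(vol)
        cnt = np.zeros_like(vol)
        for (t, y, x), r in zip(pos, rows):
            if not mask[t:t + p, y:y + p, x:x + p].any():
                continue                          # patch needs no filling
            # Sum-of-squared-differences nearest neighbour in the database.
            j = np.argmin(((db_rows - r) ** 2).sum(axis=1))
            acc[t:t + p, y:y + p, x:x + p] += db_rows[j].reshape(p, p, p)
            cnt[t:t + p, y:y + p, x:x + p] += 1
        voted = mask & (cnt > 0)
        vol[voted] = (acc / np.maximum(cnt, 1))[voted]  # average the votes
    return vol
```

Averaging the votes of all overlapping patches is what enforces the global consistency the abstract refers to: no hole voxel is decided by a single patch in isolation.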
This publication has 16 references indexed in Scilit:
- Navier-stokes, fluid dynamics, and image and video inpainting. Institute of Electrical and Electronics Engineers (IEEE), 2005.
- Graphcut textures. Association for Computing Machinery (ACM), 2003.
- Image-based rendering using image-based priors. Institute of Electrical and Electronics Engineers (IEEE), 2003.
- Learning how to inpaint from global image statistics. Institute of Electrical and Electronics Engineers (IEEE), 2003.
- Controlled animation of video sprites. Association for Computing Machinery (ACM), 2002.
- Texture mixing and texture movie synthesis using statistical learning. Institute of Electrical and Electronics Engineers (IEEE), 2001.
- Video textures. Association for Computing Machinery (ACM), 2000.
- Fast texture synthesis using tree-structured vector quantization. Association for Computing Machinery (ACM), 2000.
- Texture synthesis by non-parametric sampling. Institute of Electrical and Electronics Engineers (IEEE), 1999.
- Motion Analysis for Image Enhancement: Resolution, Occlusion, and Transparency. Journal of Visual Communication and Image Representation, 1993.