Towards An End-to-End Framework for Flow-Guided Video Inpainting
by
Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, Ming-Ming Cheng
2022
Abstract
Optical flow, which captures motion information across frames, is exploited
in recent video inpainting methods by propagating pixels along its
trajectories. However, the hand-crafted flow-based processes in these methods
are applied separately to form the whole inpainting pipeline. Consequently,
these methods are less efficient and rely heavily on the intermediate results
from earlier stages. In this paper, we propose an End-to-End framework for
Flow-Guided Video Inpainting (E^2FGVI) through three elaborately designed
trainable modules, namely, the flow completion, feature propagation, and
content hallucination modules. The three modules correspond to the three
stages of previous flow-based methods but can be jointly optimized, leading
to a more efficient and effective inpainting process. Experimental results
demonstrate that the proposed method outperforms state-of-the-art methods
both qualitatively and quantitatively and shows promising efficiency. The
code is available at https://github.com/MCG-NKU/E2FGVI.
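
The abstract describes an architecture of three trainable stages optimized
jointly rather than run as separate hand-crafted steps. Below is a minimal,
hypothetical PyTorch sketch of how such a three-stage pipeline could be
chained so that gradients flow end to end. All module names, internals,
channel counts, and the grid_sample-based warp are illustrative assumptions,
not the authors' implementation; see the linked repository for the real one.

    # Hypothetical sketch of the three-stage E^2FGVI pipeline from the
    # abstract. Internals and shapes are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class FlowCompletion(nn.Module):
        """Stands in for the flow completion module: predicts a completed
        optical flow field from features of the corrupted frames."""
        def __init__(self, channels=64):
            super().__init__()
            self.net = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

        def forward(self, feats):
            return self.net(feats)  # (B, 2, H, W) flow field

    class FeaturePropagation(nn.Module):
        """Stands in for flow-guided feature propagation: warps features
        along the completed flow (approximated here with grid_sample)."""
        def forward(self, feats, flow):
            b, _, h, w = flow.shape
            # Build a normalized sampling grid displaced by the flow.
            ys, xs = torch.meshgrid(
                torch.linspace(-1, 1, h),
                torch.linspace(-1, 1, w),
                indexing="ij")
            base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
            offset = flow.permute(0, 2, 3, 1) / torch.tensor([w / 2, h / 2])
            return nn.functional.grid_sample(
                feats, base + offset, align_corners=True)

    class ContentHallucination(nn.Module):
        """Stands in for content hallucination: fills regions propagation
        cannot reach (a single conv here; the paper uses a learned module)."""
        def __init__(self, channels=64):
            super().__init__()
            self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, feats):
            return self.net(feats)

    class E2FGVISketch(nn.Module):
        """Chains the three stages so one loss updates all of them jointly,
        in contrast to pipelines whose stages are applied separately."""
        def __init__(self, channels=64):
            super().__init__()
            self.flow = FlowCompletion(channels)
            self.prop = FeaturePropagation()
            self.hallucinate = ContentHallucination(channels)

        def forward(self, feats):
            completed_flow = self.flow(feats)
            propagated = self.prop(feats, completed_flow)
            return self.hallucinate(propagated)

    model = E2FGVISketch()
    out = model(torch.randn(1, 64, 32, 32))
    # A single optimizer step on a loss over `out` would update all three
    # stages together, which is the end-to-end property the paper targets.
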
Archived Files and Locations
application/pdf 9.0 MB (2204.02663v2)
arxiv.org (repository) | web.archive.org (webarchive)