
Towards An End-to-End Framework for Flow-Guided Video Inpainting

Release: release_f2ib4uvvdvccjpwhatq5pfndoa

by Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, Ming-Ming Cheng

Released as an article.

2022  

Abstract

Optical flow, which captures motion information across frames, is exploited in recent video inpainting methods by propagating pixels along its trajectories. However, the hand-crafted flow-based processes in these methods are applied separately to form the whole inpainting pipeline, so they are less efficient and rely heavily on intermediate results from earlier stages. In this paper, we propose an End-to-End framework for Flow-Guided Video Inpainting (E^2FGVI) built from three elaborately designed trainable modules: flow completion, feature propagation, and content hallucination. The three modules correspond to the three stages of previous flow-based methods but can be jointly optimized, leading to a more efficient and effective inpainting process. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively and shows promising efficiency. The code is available at https://github.com/MCG-NKU/E2FGVI.
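To make the three-stage pipeline concrete, the following is a minimal PyTorch sketch of how flow completion, feature propagation, and content hallucination modules might be composed in a single computation graph. All module names, layer choices, and tensor shapes here are illustrative assumptions; this does not reproduce the actual E^2FGVI architecture, which is in the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowCompletion(nn.Module):
    # Predicts a completed 2-channel flow field from two masked frames (stub).
    def __init__(self, ch=32):
        super().__init__()
        # Input: two RGB frames plus two binary masks (3+3+1+1 = 8 channels).
        self.net = nn.Sequential(
            nn.Conv2d(8, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2, 3, padding=1))
    def forward(self, f_a, f_b, m_a, m_b):
        return self.net(torch.cat([f_a, f_b, m_a, m_b], dim=1))

class FeaturePropagation(nn.Module):
    # Warps neighbor-frame features along the completed flow via grid sampling.
    def forward(self, feat, flow):
        n, _, h, w = feat.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feat.device, dtype=feat.dtype),
            torch.arange(w, device=feat.device, dtype=feat.dtype),
            indexing="ij")
        base = torch.stack((xs, ys), dim=-1)            # (h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)          # (n, h, w, 2)
        gx = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0   # normalize to [-1, 1]
        gy = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
        return F.grid_sample(feat, torch.stack((gx, gy), dim=-1),
                             align_corners=True)

class ContentHallucination(nn.Module):
    # Fills regions propagation cannot reach; a single conv stands in here
    # for brevity, whereas the real model is far more elaborate.
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x):
        return self.net(x)

class E2FGVILike(nn.Module):
    # End-to-end composition: all three stages share one computation graph,
    # so a single reconstruction loss trains them jointly.
    def __init__(self):
        super().__init__()
        self.flow = FlowCompletion()
        self.prop = FeaturePropagation()
        self.hall = ContentHallucination()
    def forward(self, frame, neighbor, mask, neighbor_mask):
        flow = self.flow(frame, neighbor, mask, neighbor_mask)
        warped = self.prop(neighbor, flow)
        filled = self.hall(warped)
        # Keep known pixels; fill masked ones with hallucinated content.
        return frame * (1 - mask) + filled * mask

# Usage: one masked target frame plus one neighbor frame, both 3x64x64.
model = E2FGVILike()
frame = torch.rand(1, 3, 64, 64)
neighbor = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.9).float()
out = model(frame, neighbor, mask, torch.zeros_like(mask))
loss = F.l1_loss(out, frame)   # one loss backprops through all three modules
loss.backward()
print(out.shape)               # torch.Size([1, 3, 64, 64])

The point of the sketch is the joint optimization the abstract claims: because the three stages live in one differentiable graph, a single loss updates all of them together, unlike prior pipelines that run hand-crafted flow stages separately.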

Archived Files and Locations

application/pdf  9.0 MB
file_cbiv4lyzxva2ffoewy747lvzby
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-04-07
Version   v2
Language   en
arXiv  2204.02663v2
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 8b6de74a-5831-482b-90a4-9190cc753534