275 Hits in 4.9 sec

Free-form Video Inpainting with 3D Gated Convolution and Temporal PatchGAN [article]

Ya-Liang Chang, Zhe Yu Liu, Kuan-Ying Lee, Winston Hsu
2019 arXiv   pre-print
In this paper, we introduce a deep learning based free-form video inpainting model, with proposed 3D gated convolutions to tackle the uncertainty of free-form masks and a novel Temporal PatchGAN loss  ...  Free-form video inpainting is a very challenging task that could be widely used for video editing such as text removal.  ...  the global and local image features and temporal information together.  ... 
arXiv:1904.10247v3 fatcat:65qapaft2fdfrdy3nst2kdufvq
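The gating idea in this entry — a sigmoid-activated branch that modulates the feature branch so the network can suppress responses inside free-form holes — can be sketched minimally. The snippet below is a toy 1x1x1-kernel version in NumPy (so channel mixing only, no spatial/temporal receptive field); the function name and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def gated_conv3d_1x1(x, w_feat, w_gate):
    """Minimal 3D gated convolution with a 1x1x1 kernel.

    x:      (C_in, T, H, W) input video-feature volume
    w_feat: (C_out, C_in) feature weights
    w_gate: (C_out, C_in) gating weights
    Returns (C_out, T, H, W): tanh(features) scaled by a sigmoid gate,
    so the gate can down-weight activations in masked regions.
    """
    # A 1x1x1 conv is a channel-mixing matmul at every (t, h, w) location.
    feat = np.einsum("oc,cthw->othw", w_feat, x)
    gate = np.einsum("oc,cthw->othw", w_gate, x)
    return np.tanh(feat) * (1.0 / (1.0 + np.exp(-gate)))  # sigmoid gate

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 8, 8))       # 2 channels, 4 frames, 8x8
y = gated_conv3d_1x1(x, rng.standard_normal((3, 2)),
                     rng.standard_normal((3, 2)))
print(y.shape)  # (3, 4, 8, 8)
```

Because the output is a tanh value scaled by a sigmoid, every activation stays strictly inside (-1, 1).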

A Novel Approach for Video Inpainting Using Autoencoders

Irfan Siddavatam, Department of Information Technology, K J Somaiya College of Engineering, Mumbai, Maharashtra, India, Ashwini Dalvi, Dipti Pawade, Akshay Bhatt, Jyeshtha Vartak, Arnav Gupta
2021 International Journal of Information Engineering and Electronic Business  
From the image and video editing perspective, inpainting is used mainly in the context of generating content to fill the gaps left after removing a particular object from the image or the video.  ...  We would be using frames from the video at hand, to gather context for the background.  ...   In case of images with solid shadows, results are blurry [19] Deep Flow-Guided Video Inpainting Objectives  A spatial and temporal coherent optical flow field is generated using a deep flow completion  ... 
doi:10.5815/ijieeb.2021.06.05 fatcat:d3ajmlh4l5dthjofbpetghsely

Towards An End-to-End Framework for Flow-Guided Video Inpainting [article]

Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, Ming-Ming Cheng
2022 arXiv   pre-print
In this paper, we propose an End-to-End framework for Flow-Guided Video Inpainting (E^2FGVI) through three elaborately designed trainable modules, namely, flow completion, feature propagation, and content  ...  Optical flow, which captures motion information across frames, is exploited in recent video inpainting methods through propagating pixels along its trajectories.  ...  Both spatial structure and temporal coherence are required to be considered in high-quality video inpainting.  ... 
arXiv:2204.02663v2 fatcat:f2ib4uvvdvccjpwhatq5pfndoa
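The core propagation step these flow-guided methods share — tracing a masked pixel along the optical flow into a reference frame and copying the value found there — can be sketched in a few lines. This is a toy single-reference, nearest-pixel version with invented names; real pipelines also complete the flow inside the hole and blend many references.

```python
import numpy as np

def propagate_by_flow(target, mask, reference, flow):
    """Fill masked pixels of `target` by tracing optical flow into `reference`.

    target:    (H, W) frame with a hole
    mask:      (H, W) bool, True where pixels are missing
    reference: (H, W) a neighboring frame
    flow:      (H, W, 2) target->reference flow as (dy, dx)
    """
    out = target.copy()
    H, W = target.shape
    ys, xs = np.nonzero(mask)
    # Follow the flow vector from each hole pixel, round to nearest pixel,
    # and clamp to the reference frame's bounds.
    ry = np.clip(np.rint(ys + flow[ys, xs, 0]).astype(int), 0, H - 1)
    rx = np.clip(np.rint(xs + flow[ys, xs, 1]).astype(int), 0, W - 1)
    out[ys, xs] = reference[ry, rx]
    return out

ref = np.arange(16.0).reshape(4, 4)
tgt = ref.copy(); tgt[1, 1] = np.nan          # a one-pixel hole
m = np.zeros((4, 4), bool); m[1, 1] = True
flow = np.zeros((4, 4, 2))                    # static scene: zero motion
filled = propagate_by_flow(tgt, m, ref, flow)
print(filled[1, 1])  # 5.0
```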

An Efficient Video Inpainting Approach Using Deep Belief Network

M. Nuthal Srinivasan, M. Chinnadurai
2022 Computer systems science and engineering  
In this view, this paper presents an efficient video inpainting approach using beetle antenna search with deep belief network (VIA-BASDBN).  ...  Finally, the inpainting of the smooth and structured regions takes place using the mean and patch matching approaches respectively.  ...  [13] proposed a novel forensic refinement architecture for localizing the deep inpainted region with the consideration of spatial temporal viewpoints.  ... 
doi:10.32604/csse.2022.023109 fatcat:glhyjcvztvhvbf4y3kz7a4ffiy

Learning a spatial-temporal texture transformer network for video inpainting

Pengsen Ma, Tao Xue
2022 Frontiers in Neurorobotics  
In this paper, we propose a novel and effective spatial-temporal texture transformer network (STTTN) for video inpainting.  ...  We study video inpainting, which aims to recover realistic textures from damaged frames.  ...  Acknowledgments I am very grateful to my tutor, TX, for his guidance and help.  ... 
doi:10.3389/fnbot.2022.1002453 fatcat:totcnz3gibcpfkl3g6w5p3l4fq

Flow-Guided Transformer for Video Inpainting [article]

Kaidong Zhang, Jingjing Fu, Dong Liu
2022 arXiv   pre-print
We decouple transformers along temporal and spatial dimension, so that we can easily integrate the locally relevant completed flows to instruct spatial attention only.  ...  For the sake of efficiency, we introduce window partition strategy to both spatial and temporal transformers.  ...  This work was supported by the Natural Science Foundation of China under Grants 62036005, 62022075, and 62021001, and by the Fundamental Research Funds for the Central Universities under Contract No.  ... 
arXiv:2208.06768v1 fatcat:mqfbf5ngy5aazjmtrnc7moyiyu
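The window partition strategy this entry mentions — computing attention inside small non-overlapping spatial windows rather than over all H*W tokens — reduces to a reshape/transpose. A minimal sketch, assuming H and W are divisible by the window size (names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def window_partition(frames, win):
    """Split (T, H, W, C) frames into non-overlapping win x win spatial windows.

    Returns (T * (H // win) * (W // win), win, win, C), so attention can be
    computed per window instead of over all H * W tokens per frame.
    """
    T, H, W, C = frames.shape
    x = frames.reshape(T, H // win, win, W // win, win, C)
    x = x.transpose(0, 1, 3, 2, 4, 5)          # bring window grid axes together
    return x.reshape(-1, win, win, C)

f = np.arange(2 * 8 * 8 * 1).reshape(2, 8, 8, 1)
wins = window_partition(f, 4)
print(wins.shape)  # (8, 4, 4, 1)
```

The first window is exactly the top-left 4x4 block of the first frame, which is an easy sanity check that the transpose is right.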

Exploiting Optical Flow Guidance for Transformer-Based Video Inpainting [article]

Kaidong Zhang, Jialun Peng, Jingjing Fu, Dong Liu
2023 arXiv   pre-print
Third, we decouple the transformer along the temporal and spatial dimensions, where flows are used to select the tokens through a temporally deformable MHSA mechanism, and global tokens are combined with  ...  We further exploit the flow guidance and propose FGT++ to pursue more effective and efficient video inpainting.  ...  If image inpainting is applied to video frames individually, the results are seldom satisfactory, since they lack the temporal consistency perceived in natural videos.  ... 
arXiv:2301.10048v1 fatcat:maalabnirjbtff4aah6oactnva

VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and Forensic Traces [article]

Tai D. Nguyen, Shengbang Fang, Matthew C. Stamm
2023 arXiv   pre-print
in forensic traces introduced by video coding, and a deep self-attention mechanism to estimate the quality and relative importance of local forensic embeddings.  ...  In this paper, we show that this is due to video coding, which introduces local variation into forensic traces.  ...  We excluded videos with audio track or temporal manipulations and videos with resolution less than 1080p.  ... 
arXiv:2211.15775v2 fatcat:knqeyxwt3vgfjdx3pztrfkbsii
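The attention mechanism this entry describes — scoring local forensic embeddings by relative importance and pooling them into one decision — boils down to a softmax-weighted average. The sketch below uses a single scoring vector as a stand-in for the learned attention network; it illustrates the weighting idea only, not the paper's architecture.

```python
import numpy as np

def attention_pool(embeddings, w):
    """Weight local forensic embeddings by a scalar score, then pool.

    embeddings: (N, d) one embedding per local patch
    w:          (d,) scoring vector (stand-in for the attention network)
    Returns (weights, pooled): softmax weights over patches and the
    weighted-average video-level descriptor.
    """
    scores = embeddings @ w
    a = np.exp(scores - scores.max())   # numerically stable softmax
    a /= a.sum()
    return a, a @ embeddings

e = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
att, pooled = attention_pool(e, np.array([1.0, 0.0]))
print(att.sum())  # 1.0
```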

Error Compensation Framework for Flow-Guided Video Inpainting [article]

Jaeyeon Kang, Seoung Wug Oh, Seon Joo Kim
2022 arXiv   pre-print
The key to video inpainting is to use correlation information from as many reference frames as possible.  ...  Our approach greatly improves the temporal consistency and the visual quality of the completed videos.  ...  Also, if there is no motion in the mask and its vicinity, flow-guided video inpainting is unsuitable, as no pixels can be traced from other frames.  ... 
arXiv:2207.10391v1 fatcat:am7wqwawprgtnjlyfkalbx4bju

The DEVIL is in the Details: A Diagnostic Evaluation Benchmark for Video Inpainting [article]

Ryan Szeto, Jason J. Corso
2022 arXiv   pre-print
Quantitative evaluation has increased dramatically among recent video inpainting work, but the video and mask content used to gauge performance has received relatively little attention.  ...  reconstruction, realism, and temporal consistency quality.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.  ... 
arXiv:2105.05332v2 fatcat:myf3eiw3orbxhcuiyjn72oa4yy

DVI: Depth Guided Video Inpainting for Autonomous Driving [article]

Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Wei Li, Ruigang Yang
2020 arXiv   pre-print
To get clear street-view and photo-realistic simulation in autonomous driving, we present an automatic video inpainting algorithm that can remove traffic agents from videos and synthesize missing regions  ...  To our knowledge, we are the first to fuse multiple videos for video inpainting.  ...  The method has been naturally extended to video inpainting, where not only spatial coherence but also temporal coherence are preserved.  ... 
arXiv:2007.08854v1 fatcat:7zitrsnbbrf57oiccarkhgcwci

Vision Transformer Based Video Hashing Retrieval for Tracing the Source of Fake Videos [article]

Pengfei Pei, Xianfeng Zhao, Yun Cao, Jinchuan Li, Xuyuan Lai
2022 arXiv   pre-print
We use an improved retrieval method to find the original video, named ViTHash.  ...  In addition, we designed a tool called Localizator to compare the difference between the original traced video and the fake video.  ...  , 2019QY2202 and 2020AAA0140000.  ... 
arXiv:2112.08117v2 fatcat:hxdhsy74e5aqpgxj3mt4kxiddu

Learning spatially-correlated temporal dictionaries for calcium imaging [article]

Gal Mishne, Adam S. Charles
2019 arXiv   pre-print
In this paper, we reverse the modeling and instead aim to minimize the spatial inference, while focusing on finding the set of temporal traces present in the data.  ...  We reframe the problem in a dictionary learning setting, where the dictionary contains the time-traces and the sparse coefficients are spatial maps.  ...  Diego and Hamprecht [16] extend convolutional sparse coding to video data, extracting the spatial components and their temporal activity while estimating a non-uniform and temporally varying background  ... 
arXiv:1902.03132v1 fatcat:6lkn5mfoerhj7ieguix76oj5ty
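The dictionary-learning setting this entry describes — a dictionary of temporal traces whose coefficients are spatial maps — can be illustrated with the inference half of the problem: given known traces, recover the per-pixel coefficients. This is a bare least-squares sketch with invented names; the paper's method learns the dictionary too and enforces sparsity, neither of which is shown here.

```python
import numpy as np

def spatial_maps_from_traces(Y, D):
    """Recover per-pixel coefficients for known temporal traces.

    Y: (P, T) movie flattened to P pixels x T timepoints
    D: (K, T) dictionary of K temporal traces (full row rank assumed)
    Returns A: (P, K) nonnegative spatial maps with Y ~= A @ D.
    """
    # Ordinary least squares: A = Y D^T (D D^T)^{-1}
    A = Y @ D.T @ np.linalg.inv(D @ D.T)
    return np.maximum(A, 0.0)   # crude nonnegativity, not a sparsity prior

rng = np.random.default_rng(1)
D = rng.random((3, 50))         # 3 temporal traces, 50 timepoints
A_true = rng.random((100, 3))   # 100 pixels
Y = A_true @ D                  # noiseless synthetic movie
A = spatial_maps_from_traces(Y, D)
print(np.allclose(A, A_true))  # True
```

On noiseless data with a full-rank dictionary, least squares recovers the true maps exactly, which makes the decomposition easy to verify.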

A Comprehensive Review of Deep-Learning-Based Methods for Image Forensics

Ivan Castillo Camacho, Kai Wang
2021 Journal of Imaging  
As such techniques become easier to use, companies that create and sell these tools have focused on lowering the need for specialized knowledge.  ...  For these reasons, it is important to have tools that can help us discern the truth.  ...  They proposed to use both DCT coefficients and spatial features for the localization.  ... 
doi:10.3390/jimaging7040069 pmid:34460519 pmcid:PMC8321383 fatcat:72zd7nyaifhvlgcxv22zztpm4y

An Overview of Recent Work in Media Forensics: Methods and Threats [article]

Kratika Bhagtani, Amit Kumar Singh Yadav, Emily R. Bartusiak, Ziyue Xiang, Ruiting Shao, Sriram Baireddy, Edward J. Delp
2022 arXiv   pre-print
For each data modality, we discuss synthesis and manipulation techniques that can be used to create and modify digital media.  ...  In this paper, we review recent work in media forensics for digital images, video, audio (specifically speech), and documents.  ...  and FA8750-16-2-0173.  ... 
arXiv:2204.12067v2 fatcat:jjeaeqy5zrbwdp62uejenndcja
Showing results 1 — 15 out of 275 results