
Efficient Temporally-Aware DeepFake Detection using H.264 Motion Vectors
Release ID: release_tqjlyv3brbbczekrqoy4tz7rlq

by Peter Grönquist, Yufan Ren, Qingyi He, Alessio Verardo, Sabine Süsstrunk

Released as an article.

2023  

Abstract

Video DeepFakes are fake media created with Deep Learning (DL) that manipulate a person's expression or identity. Most current DeepFake detection methods analyze each frame independently, ignoring inconsistencies and unnatural movements between frames. Some newer methods employ optical flow models to capture this temporal aspect, but they are computationally expensive. In contrast, we propose using the related but often overlooked Motion Vectors (MVs) and Information Masks (IMs) from the H.264 video codec to detect temporal inconsistencies in DeepFakes. Our experiments show that this approach is effective and adds minimal computational cost compared with per-frame RGB-only methods. This could lead to new, real-time temporally-aware DeepFake detection methods for video calls and streaming.
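The abstract's key point is that an H.264 decoder already computes motion vectors as part of normal decoding, so no separate optical-flow model is needed to obtain temporal information. As a minimal sketch of how such MVs can be read out per frame, the snippet below uses PyAV (a Python binding for FFmpeg) with the decoder's "+export_mvs" flag. PyAV, that flag, the "MOTION_VECTORS" side-data key, and the input.mp4 filename are tooling assumptions for illustration, not the paper's own pipeline; Information Masks are not covered here.

```python
# Sketch: read per-frame H.264 motion vectors via PyAV (assumed tooling,
# not the paper's published pipeline).
import av
import numpy as np


def extract_motion_vectors(path: str):
    """Yield one NumPy array of motion vectors per decoded frame."""
    container = av.open(path)
    stream = container.streams.video[0]
    # Ask the decoder to export motion vectors as frame side data.
    stream.codec_context.options = {"flags2": "+export_mvs"}

    for frame in container.decode(stream):
        mvs = frame.side_data.get("MOTION_VECTORS")
        if mvs is None:
            # Intra-coded (I) frames carry no motion vectors.
            yield np.empty(0)
            continue
        # Structured array with fields such as src_x/src_y, dst_x/dst_y,
        # motion_x/motion_y, and block size w/h.
        yield mvs.to_ndarray()


if __name__ == "__main__":
    # "input.mp4" is a placeholder path.
    for i, mv in enumerate(extract_motion_vectors("input.mp4")):
        print(f"frame {i}: {len(mv)} motion vectors")
```

Because the vectors fall out of decoding that a video player performs anyway, extracting them adds almost no cost on top of reading the frames, which is what makes the approach attractive for real-time settings.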

Archived Content

There are no accessible files associated with this release. You can check other releases of this work for an accessible version.

"Dark" Preservation Only

Type: article
Stage: submitted
Date: 2023-11-17
Version: v1
Language: en
arXiv: 2311.10788v1

Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).

Catalog Record
Revision: a42d9419-a932-4345-b61c-e00e0383239f