Efficient Temporally-Aware DeepFake Detection using H.264 Motion Vectors
by
Peter Grönquist, Yufan Ren, Qingyi He, Alessio Verardo, Sabine Süsstrunk
2023
Abstract
Video DeepFakes are fake media created with Deep Learning (DL) that
manipulate a person's expression or identity. Most current DeepFake detection
methods analyze each frame independently, ignoring inconsistencies and
unnatural movements between frames. Some newer methods employ optical flow
models to capture this temporal aspect, but they are computationally expensive.
In contrast, we propose using the related but often ignored Motion Vectors
(MVs) and Information Masks (IMs) from the H.264 video codec to detect
temporal inconsistencies in DeepFakes. Our experiments show that this approach
is effective and incurs minimal computational cost compared with per-frame
RGB-only methods. This could lead to new, real-time, temporally-aware DeepFake
detection methods for video calls and streaming.
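The central idea of the abstract, reusing the motion vectors a video codec already computed instead of running an expensive optical-flow model, can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the paper: in practice the per-macroblock vectors would come from an H.264 decoder (e.g. FFmpeg invoked with `-flags2 +export_mvs`), whereas here they are synthesized at random just to show how they could be rasterized and stacked with RGB as extra input channels for a detector.

```python
import numpy as np

def mv_field_to_channels(mvs, frame_h, frame_w, block=16):
    """Rasterize per-macroblock motion vectors (dx, dy) into two
    dense channels at frame resolution, so they can be concatenated
    with the RGB frame as additional network inputs.

    mvs: array of shape (frame_h//block, frame_w//block, 2) holding
         one (dx, dy) displacement per macroblock, as an H.264
         decoder would report them.
    """
    # Spread each macroblock's vector over its block x block pixel area.
    dense = np.repeat(np.repeat(mvs, block, axis=0), block, axis=1)
    return dense[:frame_h, :frame_w, :]  # crop to the exact frame size

# Hypothetical 64x64 frame -> a 4x4 grid of 16x16 macroblocks.
frame_h, frame_w = 64, 64
rng = np.random.default_rng(0)
mvs = rng.normal(size=(frame_h // 16, frame_w // 16, 2))

mv_channels = mv_field_to_channels(mvs, frame_h, frame_w)
rgb = np.zeros((frame_h, frame_w, 3), dtype=np.float32)

# Stack RGB with the two MV channels: a 5-channel detector input.
x = np.concatenate([rgb, mv_channels], axis=-1)
```

Because the codec produces these vectors as a by-product of compression, this stacking adds almost no decode-side cost, which is what makes the approach attractive for real-time settings such as video calls.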
arXiv:2311.10788v1