Voice-Face Homogeneity Tells Deepfake
by
Harry Cheng and Yangyang Guo and Tianyi Wang and Qi Li and Xiaojun Chang and Liqiang Nie
2022
Abstract
Detecting forged videos is highly desirable given the widespread abuse of deepfakes.
Existing detection approaches focus on exploiting specific artifacts in deepfake
videos and fit well on particular datasets. However, forgery techniques keep
evolving with respect to these artifacts, continually challenging the robustness
of traditional deepfake detectors. As a result, progress on the generalizability
of these approaches has stalled. To address this issue, motivated by the empirical
observations that the identities behind voices and faces are often mismatched in
deepfake videos, and that voices and faces are homogeneous to some extent, we
propose in this paper to perform deepfake detection from a previously unexplored
voice-face matching view. To this end, a voice-face matching method is devised to
measure the degree of matching between the two. Nevertheless, training on a
specific deepfake dataset makes the model overfit the traits of particular
deepfake algorithms. We instead advocate a method that quickly adapts to unseen
forgeries through a pre-training then fine-tuning paradigm. Specifically, we first
pre-train the model on a generic audio-visual dataset and then fine-tune it on
downstream deepfake data. We conduct extensive experiments on three widely used
deepfake datasets: DFDC, FakeAVCeleb, and DeepfakeTIMIT. Our method achieves
significant performance gains over other state-of-the-art competitors. It is also
worth noting that our method already achieves competitive results when fine-tuned
on limited deepfake data.
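The abstract describes scoring the agreement between the identity conveyed by a video's voice and by its face. Below is a minimal, hypothetical sketch of that matching step, not the paper's actual model: the placeholder embeddings, the cosine-similarity score, the is_likely_fake helper, and the 0.5 threshold are all illustrative assumptions standing in for the learned voice and face encoders and the calibrated decision rule.

```python
# Hypothetical illustration of voice-face matching for deepfake detection.
# The "embeddings" below are random stand-ins, NOT outputs of the paper's
# pre-trained encoders; only the matching logic is sketched here.
import numpy as np

rng = np.random.default_rng(0)


def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so a dot product equals cosine similarity."""
    return x / (np.linalg.norm(x) + 1e-12)


def voice_face_match_score(voice_emb: np.ndarray, face_emb: np.ndarray) -> float:
    """Cosine similarity between a voice embedding and a face embedding.

    A high score suggests the voice and face share one identity; a low score
    hints at the identity mismatch typical of deepfake videos.
    """
    return float(np.dot(l2_normalize(voice_emb), l2_normalize(face_emb)))


def is_likely_fake(voice_emb: np.ndarray, face_emb: np.ndarray,
                   threshold: float = 0.5) -> bool:
    """Flag a video as suspicious when the voice-face matching degree is low.

    The threshold is an assumption for illustration; in practice it would be
    calibrated on a validation split of the downstream deepfake dataset.
    """
    return voice_face_match_score(voice_emb, face_emb) < threshold


if __name__ == "__main__":
    # Stand-in embeddings: a matched pair (shared identity) vs. a mismatched pair.
    shared_identity = rng.normal(size=256)
    matched_voice = shared_identity + 0.1 * rng.normal(size=256)
    matched_face = shared_identity + 0.1 * rng.normal(size=256)
    mismatched_face = rng.normal(size=256)

    print("matched score:", voice_face_match_score(matched_voice, matched_face))
    print("mismatched score:", voice_face_match_score(matched_voice, mismatched_face))
    print("fake?", is_likely_fake(matched_voice, mismatched_face))
```

In this toy setup the matched pair scores near 1 and the mismatched pair near 0; the paper's actual decision would rely on encoders pre-trained on generic audio-visual data and fine-tuned on deepfake data, as summarized in the abstract.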
Archived Files and Locations
application/pdf 3.1 MB
arxiv.org (repository), web.archive.org (webarchive)
arXiv:2203.02195v3
Access all versions, variants, and formats of this work (e.g., pre-prints).