Learning to Transfer: Unsupervised Meta Domain Translation
by
Jianxin Lin, Yijun Wang, Tianyu He, Zhibo Chen
2019
Abstract
Unsupervised domain translation has recently achieved impressive performance
with Generative Adversarial Network (GAN) and sufficient (unpaired) training
data. However, existing domain translation frameworks are built in a
disposable way: the learning experience is discarded, and the obtained model
cannot be adapted to a newly arriving domain. In this work, we approach unsupervised domain
translation problems from a meta-learning perspective. We propose a model
called Meta-Translation GAN (MT-GAN) to find good initialization of translation
models. In the meta-training procedure, MT-GAN is explicitly trained with a
primary translation task and a synthesized dual translation task. A
cycle-consistency meta-optimization objective is designed to ensure the
generalization ability of the learned initialization. We demonstrate the effectiveness of our model on ten
diverse two-domain translation tasks and multiple face identity translation
tasks. We show that our proposed approach significantly outperforms the
existing domain translation methods when each domain contains no more than ten
training samples.
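The meta-training procedure described above can be illustrated with a minimal,
hypothetical MAML-style sketch: an inner adaptation step on a sampled
translation task, then an outer (meta) update that includes a cycle-consistency
term and back-propagates through the inner step. All names here (`G_xy`,
`G_yx`, `inner_lr`, the stand-in reconstruction losses in place of the full GAN
objectives) are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a cycle-consistency meta-optimization step
# (MAML-style), using toy linear "generators" in place of real GANs.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy generators for the two translation directions (X -> Y and Y -> X).
G_xy = nn.Linear(8, 8)
G_yx = nn.Linear(8, 8)
meta_opt = torch.optim.SGD(
    list(G_xy.parameters()) + list(G_yx.parameters()), lr=1e-2)

def cycle_loss(x, y):
    # Cycle consistency: x -> G_xy -> G_yx should reconstruct x,
    # and y -> G_yx -> G_xy should reconstruct y.
    return ((G_yx(G_xy(x)) - x).pow(2).mean()
            + (G_xy(G_yx(y)) - y).pow(2).mean())

# Sample a "task": a small unpaired batch from each domain.
x, y = torch.randn(10, 8), torch.randn(10, 8)

# Inner step: adapt a copy of G_xy's parameters on a stand-in
# primary translation objective (create_graph keeps second-order grads).
inner_lr = 0.1
grads = torch.autograd.grad((G_xy(x) - y).pow(2).mean(),
                            G_xy.parameters(), create_graph=True)
w, b = [p - inner_lr * g for p, g in zip(G_xy.parameters(), grads)]

# Outer step: evaluate the adapted parameters plus the cycle term,
# then back-propagate through the inner update (meta-gradient).
meta_loss = (x @ w.t() + b - y).pow(2).mean() + cycle_loss(x, y)
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()
```

In this sketch the initialization of `G_xy` and `G_yx` is what the outer loop
optimizes, so that a few inner gradient steps suffice on a new domain pair,
matching the paper's few-shot setting (no more than ten samples per domain).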
Archived Files and Locations
application/pdf, 2.1 MB — arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1906.00181v1