
Learning to Transfer: Unsupervised Meta Domain Translation

by Jianxin Lin, Yijun Wang, Tianyu He, Zhibo Chen

Released as an article.

2019  

Abstract

Unsupervised domain translation has recently achieved impressive performance with Generative Adversarial Networks (GANs) and sufficient (unpaired) training data. However, existing domain translation frameworks are trained in a disposable fashion: learning experience is discarded, and the resulting model cannot be adapted to a newly arriving domain. In this work, we approach unsupervised domain translation problems from a meta-learning perspective. We propose a model called Meta-Translation GAN (MT-GAN) to find a good initialization for translation models. In the meta-training procedure, MT-GAN is explicitly trained with a primary translation task and a synthesized dual translation task. A cycle-consistency meta-optimization objective is designed to ensure generalization ability. We demonstrate the effectiveness of our model on ten diverse two-domain translation tasks and multiple face identity translation tasks, and show that our proposed approach significantly outperforms existing domain translation methods when each domain contains no more than ten training samples.
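The abstract describes meta-training a pair of translators with a cycle-consistency objective to find a good initialization. As a minimal, hedged illustration of that idea (not the paper's actual architecture or training code), the sketch below treats the two generators G: X→Y and F: Y→X as scalar linear maps and runs a first-order MAML-style inner/outer update on the cycle-consistency loss |F(G(x)) − x|². All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def cycle_loss(g, f, x):
    # Cycle-consistency: F(G(x)) should reconstruct x.
    return np.mean((f * (g * x) - x) ** 2)

def grad_cycle(g, f, x):
    # Analytic gradients of the cycle loss w.r.t. the scalar
    # generator parameters g and f.
    r = f * g * x - x
    dg = np.mean(2 * r * f * x)
    df = np.mean(2 * r * g * x)
    return dg, df

def meta_step(g, f, task_batches, inner_lr=0.05, outer_lr=0.05):
    # First-order MAML-style update: adapt on each task with one
    # inner gradient step, then average post-adaptation gradients
    # to update the shared initialization.
    meta_dg, meta_df = 0.0, 0.0
    for x in task_batches:
        dg, df = grad_cycle(g, f, x)
        g_ad, f_ad = g - inner_lr * dg, f - inner_lr * df  # inner step
        dg2, df2 = grad_cycle(g_ad, f_ad, x)               # outer gradient
        meta_dg += dg2
        meta_df += df2
    n = len(task_batches)
    return g - outer_lr * meta_dg / n, f - outer_lr * meta_df / n

rng = np.random.default_rng(0)
g, f = 0.5, 0.5  # shared initialization of both translators
tasks = [rng.normal(size=32) for _ in range(4)]  # toy "domains"
for _ in range(200):
    g, f = meta_step(g, f, tasks)
# After meta-training, the product f * g approaches 1, i.e. the
# cycle F(G(x)) reconstructs x well across all toy tasks.
```

In the paper's setting, the scalar maps would be full GAN generators with adversarial losses alongside the cycle term; the sketch keeps only the meta-optimization skeleton.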

Archived Files and Locations

application/pdf  2.1 MB
file_r7yefsfuhncsjoppqrq74e3nue
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-06-01
Version   v1
Language   en
arXiv  1906.00181v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 23900387-4c41-412e-978d-e3bd75bb0cb6
API URL: JSON