Learning to Match Distributions for Domain Adaptation
by
Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu
2020
Abstract
When the training and test data come from different distributions, domain adaptation is needed to reduce dataset bias and improve the model's generalization ability. Since it is difficult to directly match the cross-domain joint distributions, existing methods tend to reduce the marginal or conditional distribution divergence using predefined distances such as maximum mean discrepancy (MMD) and adversarial-based discrepancies. However, it remains challenging to determine which method is suitable for a given application, since these methods are built with certain priors or biases and may therefore fail to uncover the underlying relationship between transferable features and joint distributions. This paper proposes Learning to Match (L2M), which automatically learns cross-domain distribution matching without relying on hand-crafted priors on the matching loss. Instead, L2M reduces the inductive bias by using a meta-network to learn the distribution matching loss in a data-driven way. L2M is a general framework that unifies task-independent and human-designed matching features. We design a novel optimization algorithm for this challenging objective based on self-supervised label propagation. Experiments on public datasets substantiate the superiority of L2M over state-of-the-art (SOTA) methods. Moreover, we apply L2M to transfer from pneumonia to COVID-19 chest X-ray images with remarkable performance. L2M can also be extended to other distribution-matching applications; in a trial experiment, we show that it generates sharper and more realistic MNIST samples.
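
The abstract contrasts predefined matching distances with a matching loss learned by a meta-network. The following PyTorch sketch illustrates that contrast under stated assumptions: rbf_mmd is a standard biased estimator of squared MMD with a Gaussian kernel, while MetaMatchingLoss is a hypothetical stand-in for a meta-network that scores cross-domain divergence from pooled feature statistics. The class name, the mean-embedding summary, and all dimensions are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn


def rbf_mmd(source: torch.Tensor, target: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between two feature batches (RBF kernel)."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances mapped through a Gaussian kernel.
        dists = torch.cdist(a, b).pow(2)
        return torch.exp(-dists / (2 * bandwidth ** 2))

    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())


class MetaMatchingLoss(nn.Module):
    # Hypothetical data-driven matching loss: instead of fixing MMD a priori,
    # a small network maps summary statistics of both domains to a scalar
    # divergence that can itself be trained (e.g., in a meta-learning loop).
    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Softplus(),  # keep the learned divergence non-negative
        )

    def forward(self, source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Summarize each domain by its mean embedding, then score the pair.
        stats = torch.cat([source.mean(dim=0), target.mean(dim=0)])
        return self.net(stats).squeeze()


if __name__ == "__main__":
    src, tgt = torch.randn(32, 128), torch.randn(32, 128) + 0.5
    print("MMD (predefined):", rbf_mmd(src, tgt).item())
    print("meta-learned loss:", MetaMatchingLoss(128)(src, tgt).item())

In the paper's framing, the learned loss would be updated jointly with the feature extractor rather than held fixed; this sketch omits that outer optimization loop.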
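
The abstract also names self-supervised label propagation as part of the optimization algorithm. Since the procedure itself is not described here, the snippet below shows the simpler, related idea of confidence-thresholded pseudo-labeling, one common way to obtain target-domain labels for estimating conditional distributions; pseudo_label and its threshold are assumptions for illustration, not the authors' exact method.

import torch
import torch.nn as nn


@torch.no_grad()
def pseudo_label(classifier: nn.Module, target_feats: torch.Tensor, threshold: float = 0.9):
    """Return the target features and pseudo-labels whose predicted
    class probability exceeds the confidence threshold."""
    probs = torch.softmax(classifier(target_feats), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold  # propagate only high-confidence predictions
    return target_feats[keep], labels[keep]


if __name__ == "__main__":
    clf = nn.Linear(128, 10)   # stand-in for a trained classifier head
    tgt = torch.randn(64, 128)
    feats, labels = pseudo_label(clf, tgt, threshold=0.2)
    print(f"kept {feats.size(0)} of 64 target samples")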
Archived Files and Locations
application/pdf, 3.6 MB: arXiv:2007.10791v3, available from arxiv.org and web.archive.org