
Deep Automatic Natural Image Matting

by Jizhizi Li, Jing Zhang, Dacheng Tao

Released as an article.

2021  

Abstract

Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input like trimap, which is useful for image editing. Prior methods try to learn semantic features to aid the matting process while being limited to images with salient opaque foregrounds such as humans and animals. In this paper, we investigate the difficulties when extending them to natural images with salient transparent/meticulous foregrounds or non-salient foregrounds. To address the problem, a novel end-to-end matting network is proposed, which can predict a generalized trimap for any image of the above types as a unified semantic representation. Simultaneously, the learned semantic features guide the matting network to focus on the transition areas via an attention mechanism. We also construct a test set AIM-500 that contains 500 diverse natural images covering all types along with manually labeled alpha mattes, making it feasible to benchmark the generalization ability of AIM models. Results of the experiments demonstrate that our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively. The source code and dataset are available at https://github.com/JizhiziLi/AIM.
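The abstract describes estimating a soft foreground (an alpha matte) from a single image. As background for readers unfamiliar with matting, the task is conventionally framed as the inverse of the standard compositing equation I = αF + (1 − α)B. The sketch below shows the forward (compositing) direction only; it is a generic illustration with NumPy, not code from the paper's repository.

```python
import numpy as np

def composite(fg, bg, alpha):
    """Composite a foreground onto a background with an alpha matte.

    Standard compositing equation: I = alpha * F + (1 - alpha) * B.
    Matting is the inverse problem: recover alpha (and F) from I alone.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1]
    alpha:  float array of shape (H, W), values in [0, 1]
    """
    a = alpha[..., None]  # add a channel axis so alpha broadcasts over RGB
    return a * fg + (1.0 - a) * bg
```

Pixels with alpha strictly between 0 and 1 form the transition region that the paper's attention mechanism focuses on; a trimap (or the generalized trimap predicted here) marks exactly where such pixels may occur.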

Archived Files and Locations

application/pdf  13.2 MB
file_26ndiqqjcjac7aij7r3v2su6sa
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-07-15
Version   v1
Language   en
arXiv  2107.07235v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 6de3791a-b98f-4a3c-ae1e-2bfdf0176310
API URL: JSON