Co-training with High-Confidence Pseudo Labels for Semi-supervised Medical Image Segmentation
by
Zhiqiang Shen, Peng Cao, Hua Yang, Xiaoli Liu, Jinzhu Yang, Osmar R. Zaiane
2023
Abstract
High-quality pseudo labels are essential for semi-supervised semantic
segmentation. Consistency regularization and pseudo labeling-based
semi-supervised methods perform co-training using the pseudo labels from
multi-view inputs. However, such co-training models tend to converge to an early
consensus during training, degenerating into self-training models. Moreover, the
multi-view inputs are generated by perturbing or augmenting the original images,
which inevitably introduces noise into the input and thus yields low-confidence
pseudo labels.
To address these issues, we propose an Uncertainty-guided
Collaborative Mean-Teacher (UCMT) for semi-supervised semantic segmentation
with high-confidence pseudo labels. Concretely, UCMT consists of two main
components: 1) a collaborative mean-teacher (CMT) for encouraging model
disagreement and performing co-training between the sub-networks, and 2)
uncertainty-guided region mix (UMIX) for manipulating the input images
according to the uncertainty maps of CMT, helping CMT produce
high-confidence pseudo labels.
By combining the strengths of UMIX and CMT, UCMT can retain model disagreement
and enhance the quality of pseudo labels for co-training segmentation.
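To make the region-mix idea concrete, the following is a minimal, hypothetical sketch of an uncertainty-guided patch swap in the spirit of UMIX: given an image, a second image, and a per-pixel uncertainty map, it replaces the most uncertain grid patches of the first image with the corresponding patches from the second. The function name `umix`, the grid partitioning, and the top-k selection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def umix(img_a, img_b, unc_a, grid=2, k=1):
    """Illustrative uncertainty-guided region mix (assumption, not the
    paper's code): swap the k most-uncertain grid patches of img_a
    (per its uncertainty map unc_a) with the matching patches of img_b."""
    h, w = img_a.shape
    ph, pw = h // grid, w // grid
    # Mean uncertainty of each grid cell, flattened row-major.
    scores = unc_a.reshape(grid, ph, grid, pw).mean(axis=(1, 3)).ravel()
    mixed = img_a.copy()
    # Paste img_b's content into the k most uncertain cells of img_a.
    for idx in np.argsort(scores)[::-1][:k]:
        i, j = divmod(int(idx), grid)
        ys = slice(i * ph, (i + 1) * ph)
        xs = slice(j * pw, (j + 1) * pw)
        mixed[ys, xs] = img_b[ys, xs]
    return mixed
```

In a full pipeline, the same patch swap would also be applied to the corresponding pseudo-label maps so that images and labels stay aligned.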
Extensive experiments on four public medical image datasets including 2D and
3D modalities demonstrate the superiority of UCMT over the state-of-the-art.
Code is available at: https://github.com/Senyh/UCMT.
Archived Files and Locations
application/pdf, 2.0 MB: arxiv.org (repository), web.archive.org (webarchive)
arXiv: 2301.04465v1