Abstract
Background Retinal pigment epithelium (RPE) aging is an important cause of vision loss. As RPE aging is accompanied by changes in cell morphology, accurate segmentation of RPE cells is a prerequisite for such morphological analyses. Due to the overwhelmingly large number of cells, manual annotation of RPE cell borders is time-consuming. Computer-based methods do not work well on cells with weak or missing borders in impaired RPE sheet regions.
Method To address this challenge, we develop a semi-supervised deep learning approach, namely MultiHeadGAN, to segment low-contrast cells from impaired regions in RPE flatmount images. The developed deep learning model has a multi-head structure that allows model training with only a small amount of human-annotated data. To strengthen the learning effect, we further train our model on RPE images without ground-truth cell borders in a generative adversarial network framework. Additionally, we develop a new shape loss that guides the network to produce closed cell borders in the segmentation results.
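For readers unfamiliar with the multi-head idea, the sketch below illustrates one plausible arrangement in PyTorch: a shared encoder feeds a supervised segmentation head (trained on the small annotated set) and a second head whose output is scored by a discriminator, so unlabeled patches can still provide a training signal. This is a minimal illustration under our own assumptions, not the authors' released code; all names (MultiHeadSegmenter, seg_head, gan_head, Discriminator) are hypothetical, and the shape loss is omitted.

```python
# Hypothetical sketch of a two-head segmentation network with a GAN-style
# discriminator for unlabeled data; not the paper's actual implementation.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class MultiHeadSegmenter(nn.Module):
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(in_ch, feat), conv_block(feat, feat))
        # Supervised head: per-pixel cell-border probability, trained on annotated patches.
        self.seg_head = nn.Sequential(conv_block(feat, feat), nn.Conv2d(feat, 1, 1), nn.Sigmoid())
        # Unsupervised head: output is scored by the discriminator on unlabeled patches.
        self.gan_head = nn.Sequential(conv_block(feat, feat), nn.Conv2d(feat, 1, 1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.seg_head(z), self.gan_head(z)

class Discriminator(nn.Module):
    """Scores whether a predicted border map looks like a plausible mask."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, feat), conv_block(feat, feat),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, 1),
        )

    def forward(self, m):
        return self.net(m)

# Toy forward pass on two fake 128x128 grayscale patches.
model, disc = MultiHeadSegmenter(), Discriminator()
patch = torch.rand(2, 1, 128, 128)
seg_out, gan_out = model(patch)
realism = disc(gan_out)  # adversarial signal usable on unlabeled data
print(seg_out.shape, realism.shape)
```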
Results In this study, 155 annotated and 1,640 unlabeled image patches are included for model training. The testing dataset consists of 200 image patches presenting large impaired RPE regions. The developed model MultiHeadGAN achieves an average RPE segmentation performance of 85.4 (correct rate), 88.8 (weighted correct rate), 87.3 (precision), and 80.1 (recall). Compared with other state-of-the-art deep learning approaches, our method demonstrates superior qualitative and quantitative performance.
Conclusions Our extensive experiments suggest that the developed deep learning method can accurately segment cells in RPE flatmount microscopy images and is promising for supporting large-scale cell morphology analyses in RPE aging investigations.
Competing Interest Statement
The authors have declared no competing interest.