PT  - JOURNAL ARTICLE
AU  - Dina Abdelhafiz
AU  - Jinbo Bi
AU  - Reda Ammar
AU  - Clifford Yang
AU  - Sheida Nabavi
TI  - Convolutional neural network for automated mass segmentation in mammography
AID - 10.1101/2020.12.01.406975
DP  - 2020 Jan 01
TA  - bioRxiv
PG  - 2020.12.01.406975
4099 - http://biorxiv.org/content/early/2020/12/02/2020.12.01.406975.short
4100 - http://biorxiv.org/content/early/2020/12/02/2020.12.01.406975.full
AB  - Background: Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even with advanced methods such as deep learning (DL). We developed a new model, based on the architecture of the semantic segmentation U-Net model, to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model on large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC).
      Results: We compared the performance of the proposed model with those of state-of-the-art DL models, including the fully convolutional network (FCN), SegNet, Dilated-Net, the original U-Net, and Faster R-CNN, as well as the conventional region growing (RG) method. The proposed Vanilla U-Net model significantly outperforms the Faster R-CNN model in terms of runtime and the Intersection over Union (IOU) metric. Trained on digitized film-based and fully digital MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground truth maps.
      Data augmentation was very effective in our experiments, raising the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909.
      Conclusions: The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images, because the segmentation process incorporates more multi-scale spatial context and captures more local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These detected maps can help radiologists differentiate benign and malignant lesions based on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model yields better performance, in terms of mean accuracy, mean DI, and mean IOU, in detecting mass lesions compared to the other DL models and the conventional method.

Competing Interest Statement: The authors have declared no competing interest.

Abbreviations:
DL    - deep learning
MG    - mammogram
CNNs  - convolutional neural networks
CAD   - computer-aided detection
ML    - machine learning
TL    - transfer learning
RG    - region growing
SVM   - support vector machine
DDSM  - digital database for screening mammography
ROIs  - regions of interest
GTMs  - ground truth maps
BCDR  - breast cancer digital repository
BN    - batch normalization
ReLU  - rectified linear unit
SFM   - screen-film mammography
FFDM  - full-field digital mammography
UCHCDM - University of Connecticut Health Center digital mammogram
E2E   - end-to-end
CLAHE - contrast limited adaptive histogram equalization
AMF   - adaptive median filter
R-CNN - region-based convolutional neural network
YOLO  - you only look once
RPN   - region proposal network
AUC   - area under the receiver operating characteristic curve
DI    - Dice index
ACC   - accuracy
IOU   - intersection over union
TP    - true positive
FN    - false negative
TN    - true negative
FP    - false positive
FPR   - false positive rate
TPR   - true positive rate
Aug   - augmentation
FCL   - fully connected layer
FCN   - fully convolutional network
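The DI and IOU values reported in the abstract are standard overlap metrics between a predicted segmentation mask and its ground truth map. The sketch below (function names and the toy masks are illustrative, not taken from the paper) shows how both are computed for binary masks, assuming NumPy arrays:

```python
import numpy as np

def dice_index(pred, gt):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over Union: |A intersect B| / |A union B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy example: a 16-pixel ground-truth lesion and a prediction shifted by one pixel.
gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=np.uint8)
pred[3:7, 3:7] = 1

print(dice_index(pred, gt))  # 3x3 = 9-pixel intersection -> 2*9/(16+16) = 0.5625
print(iou(pred, gt))         # 9 / (16 + 16 - 9) = 9/23, about 0.391
```

A perfect prediction gives DI = IOU = 1.0, so the reported mean DI of 0.951 and mean IOU of 0.909 indicate segments that closely match the ground truth lesions.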