RT Journal Article
SR Electronic
T1 Automatic grading of cervical biopsies by combining full and self-supervision
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2022.01.14.476330
DO 10.1101/2022.01.14.476330
A1 Lubrano, Mélanie
A1 Lazard, Tristan
A1 Balezo, Guillaume
A1 Bellahsen-Harrar, Yaëlle
A1 Badoual, Cécile
A1 Berlemont, Sylvain
A1 Walter, Thomas
YR 2022
UL http://biorxiv.org/content/early/2022/08/29/2022.01.14.476330.1.abstract
AB In computational pathology, predictive models from Whole Slide Images (WSI) mostly rely on Multiple Instance Learning (MIL), where each WSI is represented as a bag of tiles, each of which is encoded by a Neural Network (NN). Slide-level predictions are then achieved by building models on the agglomeration of these tile encodings. The tile encoding strategy thus plays a key role in such models. Current approaches include the use of encodings trained on unrelated data sources, full supervision, or self-supervision. While self-supervised learning (SSL) exploits unlabeled data, it often requires large computational resources to train. On the other end of the spectrum, fully supervised methods make use of valuable prior knowledge about the data but involve a costly amount of expert time. This paper proposes a framework to reconcile SSL and full supervision, showing that a combination of both provides efficient encodings, in terms of both performance and biological interpretability. On a recently organized challenge on grading cervical biopsies, we show that our mixed supervision scheme reaches high performance (weighted accuracy (WA): 0.945), outperforming both SSL (WA: 0.927) and transfer learning from ImageNet (WA: 0.877). We further shed light upon the internal representations that trigger classification results, providing a method to reveal relevant phenotypic patterns for grading cervical biopsies. We expect that the combination of full and self-supervision is an interesting strategy for many tasks in computational pathology and will be widely adopted by the field. Competing Interest Statement: The authors have declared no competing interest.