Self-Supervised Learning for Early-Stage Glaucoma Detection Using Fundus Images
Keywords:
self-supervised learning, contrastive learning, glaucoma detection, fundus images, medical image analysis

Abstract
Deep learning techniques have proven highly effective in medical image analysis. However, a major barrier to the clinical use of such solutions is that they require large amounts of annotated data, a limitation that is especially burdensome in ophthalmology, where manual annotation is both time-consuming and expensive. In this manuscript we propose a self-supervised deep learning model that detects glaucoma in retinal fundus photographs while greatly reducing the need for labeled data, with little or no loss in diagnostic performance. The proposed methodology is a two-step training procedure. First, a ResNet-18 encoder is trained through contrastive learning with the SimCLR framework on a set of unlabeled fundus images, enabling it to extract semantically rich image representations. The encoder is then frozen, and a lightweight classifier is trained on a small number of labeled glaucoma cases. Evaluation on the ACRIMA, Drishti-GS, and RIM-ONE r2 datasets shows that our model performs strongly under weak supervision. Comparative experiments confirm that self-supervised pretraining is more effective than training from scratch, achieving an area under the receiver operating characteristic curve (AUC) of 0.947. These results show that self-supervised learning has the potential to produce clinically deployable models in settings where annotated medical imagery is limited.
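The contrastive pretraining step described above relies on SimCLR's NT-Xent objective, which pulls two augmented views of the same fundus image together in embedding space while pushing apart views of different images. Below is a minimal NumPy sketch of this loss; the function name, batch layout, and temperature value are illustrative assumptions, not details taken from this manuscript:

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss from SimCLR.

    z: array of shape (2N, d), where rows i and i+N are the projected
    embeddings of two augmented views of the same image.
    """
    # L2-normalize so the dot product becomes cosine similarity
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity
    n = z.shape[0] // 2
    # the positive partner of row i is row i+n (and vice versa)
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # cross-entropy: positive logit vs. all non-self logits
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = logsumexp - sim[np.arange(2 * n), pos_idx]
    return loss.mean()
```

In the pipeline the abstract outlines, this loss would be minimized over projections produced by the ResNet-18 encoder (plus a small projection head) on unlabeled fundus images; the frozen encoder's features are then reused by the lightweight classifier in the second stage.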