Detecting Diabetic Retinopathy from Fundus Images Using Artificial Intelligence
Sina Shahparast1, Zahra Fatohllahi2, Negar Khalaf2, Vahid Sadeghi2, Tahere Mahmoodi3, Hossein Parsaei3, Mohammad Hossein Norouzzadeh4, Elias Khalili Pour5, Siamak Yousefi6*
1. Novin Pars Emerging Intelligent Health Technologies, Shiraz, Iran
2. Novin Pars Emerging Intelligent Health Technologies, Shiraz, Iran
3. Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
4. Poostchi Ophthalmology Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
5. Retina Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
6. Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
Abstract: Diabetic retinopathy (DR) is a major cause of vision impairment worldwide, affecting nearly 4 million individuals as of 2022. Early detection of DR is crucial to prevent irreversible vision loss. However, traditional manual screening methods are time-consuming and limited by the number of available ophthalmologists. Automated diagnostic systems leveraging artificial intelligence (AI) have shown promise in addressing these limitations. In this study, we present an AI-based system for detecting DR in fundus images, which aims to enhance diagnostic accuracy while reducing the workload for healthcare professionals.
Methods: Two publicly available datasets, the Kaggle EyePACS and Messidor-2, were used. Fundus images were first pre-processed: the retinal region was cropped and the images were resized to 224×224 pixels for consistency with common deep learning architectures. An ensemble learning model was developed that integrates multiple convolutional neural network (CNN) architectures through soft voting to detect DR. The system classified cases into two categories: No Diabetic Retinopathy (No DR) and more-than-mild Diabetic Retinopathy (mtmDR). Performance was assessed using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC).
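The soft-voting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model names and probability values are hypothetical, and each per-model array stands in for the softmax output of one CNN over the two classes [P(No DR), P(mtmDR)].

```python
import numpy as np

def soft_vote(prob_lists):
    """Average per-model class probabilities and take the argmax class.

    prob_lists: list of arrays, one per CNN, each shaped (n_images, 2)
    with columns [P(No DR), P(mtmDR)]. Returns the averaged probabilities
    and the predicted label per image (0 = No DR, 1 = mtmDR).
    """
    avg = np.mean(prob_lists, axis=0)   # element-wise mean across models
    return avg, avg.argmax(axis=1)

# Hypothetical softmax outputs from three CNN backbones for two fundus images.
p_model_a = np.array([[0.9, 0.1], [0.3, 0.7]])
p_model_b = np.array([[0.8, 0.2], [0.4, 0.6]])
p_model_c = np.array([[0.7, 0.3], [0.2, 0.8]])

avg_probs, labels = soft_vote([p_model_a, p_model_b, p_model_c])
# First image: mean mtmDR probability 0.2 -> No DR; second: 0.7 -> mtmDR.
```

Soft voting averages probabilities rather than counting hard votes, so a model that is very confident can outweigh two weakly opposed ones, which typically smooths out individual-architecture errors.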
Results: The Messidor-2 and EyePACS datasets included 1,748 and 88,702 fundus images, respectively. On the Messidor-2 dataset, the developed AI system achieved an accuracy of 90.2%, a sensitivity of 89.7%, a specificity of 90.4%, and an AUROC of 0.97. On the EyePACS dataset, the system demonstrated an accuracy of 86.1%, a sensitivity of 80.4%, a specificity of 90.1%, and an AUROC of 0.92. Most misclassifications occurred in the moderate stage, in which 24.7% of patients were classified as No DR; the error rates for severe and proliferative DR cases were 2.29% and 2.83%, respectively. Compared with previous methods, the model showed competitive performance.
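For readers unfamiliar with how these metrics relate, the sketch below computes accuracy, sensitivity, and specificity from binary labels. The ten-sample toy data are illustrative only and are not taken from either dataset; here the positive class (1) denotes mtmDR, matching the paper's two-category scheme.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate on mtmDR = 1), and
    specificity (true-negative rate on No DR = 0) from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy example: 10 eyes, 4 with mtmDR; one false negative, one false positive.
m = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
# sensitivity = 3/4 = 0.75, specificity = 5/6 ~= 0.833, accuracy = 0.8
```

Sensitivity and specificity trade off against each other as the decision threshold moves; the AUROC summarizes that trade-off across all thresholds, which is why it can exceed both single-threshold rates reported above.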
Conclusion: The proposed AI-based system demonstrated strong performance in detecting DR, with high sensitivity and specificity across datasets. Unlike traditional diagnostic tools, this system functions autonomously, reducing the need for intervention by ophthalmologists. This advancement has the potential to enhance the efficiency of DR screening, especially in resource-limited areas, and to alleviate the workload on healthcare professionals.