Advancing Diagnosis of Retinopathy of Prematurity: A Web-Based AI Solution
Vahid Sadeghi1, Sina Shahparast1, Zahra Fathollahi1, Negar Khalaf1, Tahere Mahmoodi1, Hossein Parsaei1, Mohammad Hossein Nowrozzadeh2, Elias Khalilipour3, Siamak Yousefi4*
- Novin Pars Emerging Intelligent Health Technologies
- Poostchi Ophthalmology Research Center, Department of Ophthalmology, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Department of Pediatric Ophthalmology, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
Abstract: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness, making timely diagnosis essential for effective treatment. In this study, we introduce the Novin Salamat Pars Retinopathy of Prematurity Computer-Aided Diagnosis (NSP ROP CAD) system, a web-based artificial intelligence (AI) model designed to automatically classify ROP stages and validated on three independent databases of retinal fundus images. Our goal is an automated, user-friendly platform that facilitates image analysis and supports clinical decision-making, allowing clinicians to use the model without extensive technical knowledge.
Methods: We evaluated the performance of our AI-based system using three independent databases of color fundus images. The model was integrated into a comprehensive graphical user interface (GUI) and a web-based platform tailored for clinical environments. The GUI includes automated tools for image enhancement, annotation, and classification of ROP stages, as well as the generation of diagnostic reports in PDF format. Furthermore, the platform enables visualization of key retinal regions used by the model for decision-making, enhancing interpretability.
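To make the described pipeline concrete, the sketch below shows how a ROP CAD back end might chain image enhancement, stage classification, and PDF report generation. It is illustrative only: the CLAHE preprocessing, ResNet-50 backbone, fpdf-based report, file names, and all function names are assumptions, not the NSP ROP CAD implementation.

```python
# Illustrative sketch only: model, preprocessing, and report layout are assumptions.
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from fpdf import FPDF  # hypothetical choice for generating the PDF report

STAGES = ["No ROP", "Stage 1", "Stage 2", "Stage 3", "Stage 4", "Stage 5"]

def enhance_fundus(image_bgr: np.ndarray) -> np.ndarray:
    """Contrast enhancement with CLAHE on the lightness channel (a common fundus preprocessing choice)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def classify(image_bgr: np.ndarray, model: torch.nn.Module) -> str:
    """Run a single fundus image through a CNN classifier and return the predicted ROP stage."""
    prep = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224)),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(prep(rgb).unsqueeze(0))
    return STAGES[int(logits.argmax(dim=1))]

def write_report(path: str, image_path: str, stage: str) -> None:
    """Emit a one-page PDF diagnostic report (layout is purely illustrative)."""
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Helvetica", size=12)
    pdf.cell(0, 10, f"Image: {image_path}", ln=True)
    pdf.cell(0, 10, f"Predicted ROP stage: {stage}", ln=True)
    pdf.output(path)

if __name__ == "__main__":
    model = models.resnet50(weights=None, num_classes=len(STAGES)).eval()  # stand-in backbone
    img = enhance_fundus(cv2.imread("fundus_example.png"))                 # placeholder file name
    write_report("rop_report.pdf", "fundus_example.png", classify(img, model))
```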
Results: Evaluated on three distinct databases, our system demonstrated strong performance across all datasets, achieving an overall accuracy of 85%, precision of 86%, sensitivity of 85%, and an F-score of 85%. Evaluation on multiple datasets supports the model's robustness across diverse clinical scenarios. Additionally, the intuitive GUI streamlined the analysis of ROP images by providing tools for automated classification, anatomical segmentation, and image annotation, facilitating practical use in clinical workflows.
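For reference, the reported metrics follow the standard multi-class definitions. The snippet below shows how they would be computed from predicted and ground-truth stages; the labels are synthetic, and the weighted averaging scheme is an assumption rather than the study's stated choice.

```python
# How the summary metrics relate to predictions, on a tiny synthetic example.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 2, 2, 3, 1, 0, 2]   # ground-truth ROP stages (made up)
y_pred = [0, 1, 2, 1, 3, 1, 0, 2]   # model outputs (made up)

acc = accuracy_score(y_true, y_pred)
# The averaging choice matters for multi-class staging; "weighted" is assumed here.
prec = precision_score(y_true, y_pred, average="weighted", zero_division=0)
sens = recall_score(y_true, y_pred, average="weighted", zero_division=0)  # sensitivity == recall
f1 = f1_score(y_true, y_pred, average="weighted", zero_division=0)

print(f"accuracy={acc:.2f} precision={prec:.2f} sensitivity={sens:.2f} F-score={f1:.2f}")
```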
Conclusion: This study highlights the potential of a web-based AI model for diagnosing ROP. Validation across multiple datasets supports its utility in clinical settings, and the combination of an explainable AI component with a user-friendly interface promotes both model reliability and accessibility for healthcare providers. These advances can improve early detection and management of ROP, ultimately enhancing patient outcomes in real-world clinical environments.
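As an illustration of the kind of explainability component described above, the sketch below produces a Grad-CAM heatmap that highlights decision-relevant regions of an input image. Grad-CAM is one common choice for this purpose; the abstract does not specify the authors' actual method, so the layer choice, backbone, and inputs here are assumptions.

```python
# Illustrative Grad-CAM sketch; not the authors' explainability implementation.
import cv2
import numpy as np
import torch
from torchvision import models

def grad_cam(model: torch.nn.Module, layer: torch.nn.Module, x: torch.Tensor) -> np.ndarray:
    """Return a [0, 1] heatmap of the regions most responsible for the top prediction."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    logits = model(x)
    handle.remove()
    score = logits[0, int(logits.argmax(dim=1))]
    grads = torch.autograd.grad(score, feats["a"])[0]             # d(score)/d(feature maps)
    weights = grads.mean(dim=(2, 3), keepdim=True)                # global-average-pooled gradients
    cam = torch.relu((weights * feats["a"]).sum(dim=1)).squeeze() # weighted feature-map sum
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
    return cv2.resize(cam.detach().numpy(), (x.shape[3], x.shape[2]))

if __name__ == "__main__":
    model = models.resnet18(weights=None, num_classes=6).eval()   # stand-in classifier
    x = torch.rand(1, 3, 224, 224)                                # stand-in fundus tensor
    heatmap = grad_cam(model, model.layer4, x)
    print(heatmap.shape, float(heatmap.min()), float(heatmap.max()))
```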