Prediction of Premature Retinopathy Fundus Images Using Dense Network Model for Intelligent Portable Screening Device
Dr. B. Aruna Devi, Professor, Department of Electronics and Communication Engineering, Dr. N.G.P. Institute of Technology, Coimbatore, India. arunadevi@drngpit.ac.in, ORCID: 0000-0001-7708-5872
Dr. S. Jaganathan, Professor, Department of Electrical and Electronics Engineering, Dr. N.G.P. Institute of Technology, Coimbatore, India. jaganathan@drngpit.ac.in, ORCID: 0000-0002-2967-7301
Dr. Parag K Shah, Medical Consultant, Retina & Vitreous Services, Department of Pediatric Retina & Ocular Oncology, Aravind Eye Hospital & Post Graduate Institute of Ophthalmology, Coimbatore, India. drshahpk2002@yahoo.com, ORCID: 0000-0002-5014-6599
Dr. Narendran Venkatapathy, Chief Medical Officer, Department of Pediatric Retina & Ocular Oncology, Aravind Eye Hospital & Post Graduate Institute of Ophthalmology, Coimbatore, India. narendran.venkatapathy@gmail.com, ORCID: 0000-0001-7436-6783
Keywords: Retinopathy, Premature, Prediction, Deep Learning, Pre-processing, Telemedicine, ROP.
Abstract
Retinopathy of Prematurity (ROP) is a serious retinal condition that affects preterm infants and, if left untreated, can result in irreversible blindness. Diagnosis of ROP suffers from inter-observer variability and inconsistency, so the development of an automated system for ROP prediction becomes imperative. Although various methods have been explored for automated ROP diagnosis, dedicated models with satisfactory performance have been lacking. This study addresses these gaps by constructing a multi-channel dense Convolutional Neural Network (MCD-CNN) tailored for ROP prediction and suitable for large-scale infant screening. The pipeline comprises CLAHE pre-processing, image labelling, image denoising, masking, and image generation for retinal vessel prediction in fundus images. The multi-channel CNN then applies feature selection to extract and choose features from the pre-processed images. The proposed model attains a noteworthy 97.5% accuracy, 98% sensitivity, and 98.5% specificity, outperforming both pre-trained models and deep learning classifiers. Overall, the study contributes to improving ROP diagnosis and fostering access to healthcare, particularly in remote areas.
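To illustrate the CLAHE pre-processing step named in the abstract, the sketch below implements a simplified, NumPy-only variant: the image is split into tiles, each tile's histogram is clipped and the excess redistributed, and a per-tile equalization mapping is applied. This is an illustrative sketch, not the authors' implementation; the clip limit, tile count, and the omission of full CLAHE's bilinear interpolation between tiles (which removes tile seams) are assumptions made here for brevity.

```python
import numpy as np

def clahe_simple(img, clip_limit=40, tiles=4, bins=256):
    """Simplified CLAHE on a single-channel uint8 image.

    Assumes image height and width are divisible by `tiles`.
    Omits the bilinear interpolation between tile mappings that
    full CLAHE uses, so tile seams may be visible; illustrative only.
    """
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    out = np.empty_like(img)
    for i in range(tiles):
        for j in range(tiles):
            tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            hist, _ = np.histogram(tile, bins=bins, range=(0, bins))
            # Clip the histogram and redistribute the excess uniformly:
            # this limits contrast amplification in near-uniform regions.
            excess = np.maximum(hist - clip_limit, 0).sum()
            hist = np.minimum(hist, clip_limit) + excess // bins
            # Build the equalization mapping from the clipped CDF.
            cdf = hist.cumsum().astype(np.float64)
            cdf = (cdf - cdf.min()) * (bins - 1) / max(cdf.max() - cdf.min(), 1)
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = cdf[tile]
    return out

# Usage: in fundus imaging the green channel is typically enhanced,
# since retinal vessels show the highest contrast there (synthetic
# data stands in for a real fundus image).
rng = np.random.default_rng(0)
green_channel = rng.integers(0, 256, (64, 64), dtype=np.uint8)
enhanced = clahe_simple(green_channel)
```

In practice a library routine such as OpenCV's `cv2.createCLAHE` would replace this hand-rolled version; the sketch only makes the clip-and-redistribute mechanism explicit.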