Artificial Intelligence in Glaucoma Screening: Advances, Challenges and Future Directions
DOI: https://doi.org/10.64021/

Keywords: Artificial Intelligence, Glaucoma Screening, Fundus Imaging, Multimodal Imaging, Optical Coherence Tomography

Abstract
Glaucoma is a leading cause of irreversible blindness worldwide, and many cases remain undiagnosed because of the limitations of traditional screening methods. Conventional diagnostic approaches rely on specialized equipment, trained clinicians, and subjective interpretation, restricting large-scale screening and early detection, particularly in resource-limited settings. Artificial intelligence (AI)-based screening methods have emerged as scalable and objective solutions for automated glaucoma detection using retinal imaging data. This review provides a comprehensive overview of recent advances in AI-driven glaucoma screening, focusing on methodological innovations, diagnostic performance, fairness considerations, and real-world implementation challenges. A systematic analysis of studies published up to early 2025 was conducted, covering AI applications in fundus photography, optical coherence tomography (OCT), and multimodal imaging. Approaches including deep learning-based classification, segmentation, and progression prediction are evaluated. Recent AI models demonstrate high diagnostic performance, with reported accuracies of 95–98% and strong sensitivity and specificity. Multimodal fusion enhances early detection and progression monitoring, while explainable AI techniques improve transparency by highlighting clinically relevant retinal regions. Fairness-aware strategies further address demographic disparities to support equitable screening. Lightweight architectures enable portable and mobile deployment for large-scale community screening. AI significantly improves the accuracy, accessibility, and scalability of glaucoma detection. Continued emphasis on data diversity, interpretability, and clinical validation is essential for sustainable integration into real-world ophthalmic practice.

References
Y. C. Tham, X. Li, T. Y. Wong, H. A. Quigley, T. Aung, and C. Y. Cheng, “Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis,” Ophthalmology, vol. 121, no. 11, pp. 2081–2090, Nov. 2014, doi: 10.1016/j.ophtha.2014.05.009.
World Health Organization, “Levels and trends in child malnutrition: Key Findings of the 2020 Edition of the Joint Child Malnutrition Estimates.” Accessed: Mar. 27, 2026. [Online]. Available: https://www.unicef.org/media/69816/file/Joint-malnutrition-estimates-2020.pdf
A. Esteva, A. Robicquet, B. Ramsundar, et al., “A guide to deep learning in healthcare,” Nat Med, vol. 25, no. 1, pp. 24–29, Jan. 2019, doi: 10.1038/s41591-018-0316-z.
A. Rajkomar, M. Hardt, M. D. Howell, G. Corrado, and M. H. Chin, “Ensuring fairness in machine learning to advance health equity,” Ann Intern Med, vol. 169, no. 12, pp. 866–872, Dec. 2018, doi: 10.7326/M18-1990.
D. S. W. Ting et al., “Artificial intelligence and deep learning in ophthalmology,” British Journal of Ophthalmology, vol. 103, no. 2, pp. 167–175, Feb. 2019, doi: 10.1136/bjophthalmol-2018-313173.
S. M. McKinney et al., “International evaluation of an AI system for breast cancer screening,” Nature, vol. 577, no. 7788, pp. 89–94, Jan. 2020, doi: 10.1038/s41586-019-1799-6.
G. M. Nagamani and T. Sudhakar, “An improved dynamic-layered classification of retinal diseases,” IAES International Journal of Artificial Intelligence (IJ-AI), vol. 13, no. 1, pp. 417–429, Mar. 2024, doi: 10.11591/ijai.v13.i1.pp417-429.
K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Jun. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
M. B. Djulbegovic, H. Bair, D. J. T. Gonzalez, H. Ishikawa, G. Wollstein, and J. S. Schuman, “Artificial intelligence for optical coherence tomography in glaucoma,” Transl Vis Sci Technol, vol. 14, no. 1, doi: 10.1167/tvst.14.1.27.
P. Sharma, N. Takahashi, and T. Ninomiya, “A hybrid multi model artificial intelligence approach for glaucoma screening using fundus images,” npj Digit Med, vol. 8, no. 130, doi: 10.1038/s41746-025-01473-w.
B. Chuter, J. Huynh, and S. Hallaj, “Evaluating a foundation artificial intelligence model for glaucoma detection using color fundus photographs,” Ophthalmol Sci, vol. 5, no. 1, doi: 10.1016/j.xops.2024.100623.
E. E. Hwang, D. Chen, and Y. Han, “Utilization of image-based deep learning in multimodal glaucoma detection neural network from a primary patient cohort,” Ophthalmol Sci, vol. 5, no. 1, doi: 10.1016/j.xops.2025.100016.
M. Zeppieri et al., “Novel Approaches for the Early Detection of Glaucoma Using Artificial Intelligence,” Life, vol. 14, no. 11, p. 1386, Oct. 2024, doi: 10.3390/life14111386.
W. S. Lim, H. Y. Ho, and H. C. Ho, “Use of multimodal dataset in artificial intelligence for detecting glaucoma based on fundus photographs assessed with optical coherence tomography,” BMC Med Imaging, vol. 22, no. 206, doi: 10.1186/s12880-022-00933-z.
A. S. Pathan, S. S. Deshmukh, and P. P. Choudhari, “Novel deep learning model for glaucoma detection using fusion of fundus and optical coherence tomography images,” Sensors (Basel), vol. 25, no. 14, doi: 10.3390/s25144337.
M. M. Hasan, J. Phu, H. Wang, A. Sowmya, M. Kalloniatis, and E. Meijering, “OCT-based diagnosis of glaucoma and glaucoma stages using explainable machine learning,” Sci. Rep., vol. 15, no. 1, p. 3592, Jan. 2025, doi: 10.1038/s41598-025-87219-w.
Y. Hagiwara et al., “Automatic detection of glaucoma via fundus imaging and artificial intelligence,” Comput Med Imaging Graph, vol. 105, no. 102202, doi: 10.1016/j.compmedimag.2023.102202.
A. H. Chan, C. Y. Cheung, T. H. Rim, Y. C. Tham, and C. Y. Cheng, “Artificial intelligence in glaucoma detection using color fundus photographs,” Asia Pac J Ophthalmol (Phila.), vol. 9, no. 4, pp. 357–366, doi: 10.1097/APO.0000000000000313.
M. Shi, Y. Luo, and Y. Tian, “Equitable artificial intelligence for glaucoma screening,” npj Digit Med, vol. 8, no. 46, doi: 10.1038/s41746-025-01432-5.
A. R. Ran et al., “Deep learning in glaucoma with optical coherence tomography: a review,” Eye, vol. 35, no. 1, pp. 188–201, Jan. 2021, doi: 10.1038/s41433-020-01191-5.
L. Wang, X. Xu, A. K. Singh, and M. A. Frazier, “Deep learning models for early glaucoma detection: A systematic review and meta-analysis,” Sci Rep, vol. 14, no. 9821, doi: 10.1038/s41598-024-65672-1.
S. M. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions,” in Advances in Neural Information Processing Systems, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., Curran Associates, Inc., 2017. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf
M. Tan and Q. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” in Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov, Eds., in Proceedings of Machine Learning Research, vol. 97. PMLR, Mar. 2019, pp. 6105–6114. [Online]. Available: https://proceedings.mlr.press/v97/tan19a.html
O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Springer, 2015, pp. 234–241. doi: 10.1007/978-3-319-24574-4_28.
R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” Int J Comput Vis, vol. 128, no. 2, pp. 336–359, Feb. 2020, doi: 10.1007/s11263-019-01228-7.
M. Ghassemi, L. Oakden-Rayner, and A. L. Beam, “The false hope of current approaches to explainable artificial intelligence in health care,” Lancet Digit Health, vol. 3, no. 11, doi: 10.1016/S2589-7500(21)00208-9.
E. J. Topol, “High-performance medicine: the convergence of human and artificial intelligence,” Nat Med, vol. 25, no. 1, pp. 44–56, doi: 10.1038/s41591-018-0300-7.
License
Copyright (c) 2026 Virendra Kumar Tiwari, Jitendra Agrawal, Sanjay Bajpai (Author)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
