Deep learning (DL) has been widely used in ophthalmic disease diagnosis, achieving performance comparable to that of senior ophthalmologists in less time, and it is expected to improve the efficiency of image labeling in clinical trials. In this retrospective study, Fan and colleagues investigated the diagnostic performance of DL algorithms in classifying fundus photographs from the OHTS cohort as glaucomatous or non-glaucomatous. ResNet-50 was used as the backbone of the DL algorithm. Fundus photographs from 1147 OHTS participants were included in the training set, 167 in the validation set, and 322 in the test set. Three external test sets from the USA, Japan, and China were used to evaluate the generalizability of the DL algorithm.

For the OHTS endpoints based on optic disc and visual field changes, the DL model achieved AUROCs of 0.91 (95% CI, 0.88-0.94) and 0.86 (95% CI, 0.76-0.93), respectively, indicating that it can detect glaucomatous eyes with either structural or functional changes with relatively high accuracy. However, the diagnostic performance of the model developed on the optic disc endpoint was worse in the external datasets, with AUROCs ranging from 0.74 to 0.79, possibly owing to differences in ethnicity, sex, and disease distribution across these datasets. In general, these results suggest that DL algorithms may help standardize and accelerate data labeling in large clinical trials, reducing the number of image graders needed and improving the consistency of the labels.

The limitations of this study should also be noted. First, the performance of the DL algorithms fell on the external datasets; better generalizability is necessary for clinical deployment, since DL algorithms may receive imaging data from various sources. Second, poor-quality photographs were excluded, although they are frequently encountered in clinical practice.
A quality-control step, performed either by ophthalmologists or by DL algorithms, would facilitate the application of diagnostic algorithms in the real world.
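As an aside on the metrics reported above: the AUROC point estimates with 95% confidence intervals cited for the DL model are typically obtained by comparing model scores against the reference labels and bootstrapping over the test set. The study does not describe its exact procedure, so the following is only an illustrative sketch, using the rank-based (Mann-Whitney) formulation of AUROC and a percentile bootstrap on synthetic data; all function names and the synthetic inputs are hypothetical.

```python
import numpy as np

def auroc(labels, scores):
    # AUROC equals the probability that a randomly chosen positive
    # (glaucomatous) eye scores higher than a randomly chosen negative one,
    # counting ties as one half (Mann-Whitney formulation).
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample eyes with replacement and
    # recompute the AUROC on each resample.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        lb, sc = labels[idx], scores[idx]
        if lb.min() == lb.max():  # resample lost one class entirely; skip
            continue
        stats.append(auroc(lb, sc))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

if __name__ == "__main__":
    # Synthetic example: 200 eyes with noisy scores for illustration only.
    rng = np.random.default_rng(42)
    y = rng.integers(0, 2, 200)
    s = y * 0.6 + rng.normal(0, 0.4, 200)  # positives tend to score higher
    point = auroc(y, s)
    lo, hi = bootstrap_ci(y, s)
    print(f"AUROC {point:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

On a small hand-checked case, `auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75, since three of the four positive-negative pairs are correctly ordered. The percentile bootstrap is one common choice; bias-corrected variants are also widely used and can differ slightly on small test sets.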