The Journal of Practical Medicine ›› 2025, Vol. 41 ›› Issue (22): 3598-3608. doi: 10.3969/j.issn.1006-5725.2025.22.019

• Medical Examination and Clinical Diagnosis •

Developing an unsupervised deep learning model for diabetic nephropathy prediction using panoramic fundus retinal images

Dan ZHU1, Wanjun LU1, Ying ZHU1, Jinlu CAO1, Yingzi CHEN2

  1. Department of Neurology, Jiangdu People's Hospital Affiliated to Yangzhou University, Yangzhou 225200, Jiangsu, China
  2. Department of Endocrinology, Jiangdu People's Hospital Affiliated to Yangzhou University, Yangzhou 225200, Jiangsu, China
  • Received: 2025-08-01  Online: 2025-11-25  Published: 2025-11-26
  • Corresponding author: Wanjun LU  E-mail: xue1203@sina.com
  • Funding: Yangzhou Basic Research Program (Joint Special Project), Health Category Project (2024-2-29)


Abstract:

Objective To explore the feasibility of a deep learning model based on early fundus lesions, without manual segmentation, in panoramic retinal images for predicting diabetic kidney disease (DKD), and to evaluate the added value of different binocular fusion strategies. Methods A retrospective cohort of 353 patients with type 2 diabetes mellitus (T2DM) admitted to the Endocrinology Department of Jiangdu People's Hospital Affiliated to Yangzhou University between December 2022 and March 2024 was analyzed. Patients were divided into DKD (n = 114) and non-diabetic kidney disease (NDKD, n = 239) groups according to the presence of DKD. First, a U-Net-based pre-trained automatic segmentation model was developed and applied to batch-process all panoramic fundus retinal images. Next, left-eye and right-eye deep learning models were constructed with ResNet152 under a five-fold cross-validation framework (70% training, 30% validation). Three binocular fusion strategies were then implemented to integrate information from both eyes: result fusion, feature fusion, and image fusion. Model performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The DeLong test was used to compare AUC differences among models, while the net reclassification index (NRI) and decision curve analysis (DCA) were used to assess clinical utility. Results Six prediction models were developed: a clinical parameter model, a left fundus model, a right fundus model, a binocular image fusion model, a binocular result fusion model, and a binocular feature fusion model. The Transformer-based binocular feature fusion model achieved the highest AUC in both the training and validation sets (0.864 and 0.658, respectively). DeLong tests revealed a significant AUC advantage of the Transformer model over the other five models in the training set (all P < 0.001), though no significant differences were observed in the validation set (all P > 0.05). NRI analysis yielded negative values for all comparisons against the Transformer model (training set: -0.255, -0.244, -0.289, -0.426, -0.163; validation set: -0.060, -0.016, -0.028, -0.105, -0.033), indicating its optimal predictive performance. DCA further demonstrated a greater net benefit for the Transformer-based fusion model. Conclusions A deep learning model built on early fundus lesions, without manual segmentation, in panoramic retinal images can predict DKD. The Transformer-based fusion strategy performed best, providing a novel approach for the further optimization and development of DKD prediction tools.
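The binocular feature-fusion step described in the abstract can be illustrated with a minimal, self-contained NumPy sketch. This is only a conceptual illustration, not the authors' implementation: it applies Transformer-style single-head self-attention over the two per-eye feature "tokens", and the feature dimension and projection matrices are hypothetical stand-ins for parameters a real model would learn.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_binocular_features(left_feat, right_feat, w_q, w_k, w_v):
    """Single-head self-attention over the two per-eye feature 'tokens'.

    left_feat, right_feat: (d,) pooled CNN feature vectors (e.g. from ResNet152).
    w_q, w_k, w_v: (d, d) query/key/value projections (learned in a real model).
    Returns a fused (2*d,) vector obtained by flattening the attended tokens.
    """
    tokens = np.stack([left_feat, right_feat])           # (2, d): one token per eye
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    attn = softmax(q @ k.T / np.sqrt(tokens.shape[1]))   # (2, 2) cross-eye mixing weights
    fused = attn @ v                                     # each eye attends to both eyes
    return fused.reshape(-1)

# Tiny demo with random weights (illustration of the tensor flow only)
rng = np.random.default_rng(0)
d = 8
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = fuse_binocular_features(rng.normal(size=d), rng.normal(size=d), w_q, w_k, w_v)
```

In a trained model, the fused vector would feed a classification head producing the DKD probability; the random weights here only demonstrate how the two eyes' features are mixed before classification.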

Key words: type 2 diabetes mellitus, diabetic kidney disease, convolutional neural network, fusion, deep learning
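The two headline metrics, AUC and the NRI, can be computed from first principles. The sketch below is a generic illustration rather than the paper's analysis code: AUC via the Mann-Whitney formulation (probability that a random positive case outscores a random negative one), and the continuous, category-free form of the NRI comparing a new model's risk scores against a reference model's.

```python
def auc_mann_whitney(labels, scores):
    """ROC AUC as the probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Ties between a positive and a negative count as half a win
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def continuous_nri(labels, old_scores, new_scores):
    """Continuous NRI of `new_scores` over `old_scores`:
    (P(up|event) - P(down|event)) + (P(down|nonevent) - P(up|nonevent))."""
    ev = [(n > o) - (n < o) for y, o, n in zip(labels, old_scores, new_scores) if y == 1]
    ne = [(n > o) - (n < o) for y, o, n in zip(labels, old_scores, new_scores) if y == 0]
    return sum(ev) / len(ev) - sum(ne) / len(ne)

# Toy example: two negatives followed by two positives
auc = auc_mann_whitney([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])                    # 0.75
nri = continuous_nri([0, 0, 1, 1], [0.2, 0.3, 0.4, 0.5], [0.1, 0.4, 0.5, 0.6])  # 1.0
```

A negative NRI for a competing model, as reported in the abstract, means the reference (here, the Transformer-based fusion model) reclassifies patients in the correct direction more often than the competitor does.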
