RPEGENE-Net: A Multi-Resolution Deep Learning Framework for Predicting Gene Expression from Microscopy Images of Retinal Pigment Epithelium (RPE) Cells

Fatemeh SanieJahromi1, Tahereh Mahmoudi2, Neda Taghinejad3, Mohammad Hossein Nowroozzadeh1*

  1. Poostchi Ophthalmology Research Center, Shiraz University of Medical Sciences
  2. Department of Medical Physics, School of Medicine, Nanomedicine and Nanobiology Research Center, Shiraz University of Medical Sciences
  3. Faculty of Electrical and Computer Engineering, Shiraz University

Abstract: Aim: To develop a deep learning framework, RPEGENE-Net, capable of predicting gene expression profiles of retinal pigment epithelium (RPE) cells using live-cell microscopy images.

Methods: A dataset of live-cell images of RPE cells, treated with various drug regimens and captured at magnifications of 40x, 100x, 200x, and 400x, was used. Expression of six key genes (α-SMA, ZEB1, TGF-β, CD90, β-catenin, and Snail) and treatment classes (aflibercept, bevacizumab, dexamethasone, aflibercept + dexamethasone, and untreated control) were analyzed. Twelve deep learning architectures were evaluated for the regression and classification tasks. Preprocessing steps included normalization, patch extraction, and data augmentation. DenseNet121 was fine-tuned through a two-stage training process that incorporated multi-resolution imaging data for robust prediction.
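As an illustration of the two-stage fine-tuning described above, the following is a minimal PyTorch sketch of a DenseNet121 backbone with a six-gene regression head. The function names, epoch counts, learning rates, and the patch data loader are assumptions made for illustration only; they are not taken from the study.

```python
# Minimal sketch of the two-stage DenseNet121 fine-tuning (assumed details, not the
# authors' exact training configuration).
import torch
import torch.nn as nn
from torchvision import models

GENES = ["a-SMA", "ZEB1", "TGF-b", "CD90", "b-catenin", "Snail"]  # six target genes

def build_rpegene_net(n_outputs: int = len(GENES)) -> nn.Module:
    """DenseNet121 backbone with a linear regression head for gene-expression prediction."""
    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, n_outputs)
    return model

def two_stage_finetune(model, train_loader, device="cuda",
                       head_epochs=5, full_epochs=20):
    """Stage 1: train only the new head. Stage 2: unfreeze and fine-tune the whole network."""
    criterion = nn.MSELoss()
    model.to(device)

    # Stage 1: freeze the pretrained feature extractor, train the regression head.
    for p in model.features.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
    for _ in range(head_epochs):
        for x, y in train_loader:          # x: multi-resolution image patches, y: expression values
            opt.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()

    # Stage 2: unfreeze all layers and fine-tune at a lower learning rate.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(full_epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model
```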

Results: DenseNet121 outperformed all other models in predicting gene expression and classifying treatment types. For the regression task, it achieved the highest Pearson correlation coefficient (PCC) values for almost all six genes: α-SMA (0.79), ZEB1 (0.84), TGF-β (0.83), CD90 (0.83), β-catenin (0.83), and Snail (0.86). The average mean absolute error (MAE) and root mean square error (RMSE) on the test dataset were 0.0244 and 0.1228, respectively. R² scores ranged from 0.50 (α-SMA) to 0.74 (TGF-β), indicating strong agreement between predicted and measured gene expression values. A multi-level approach combining data from the 40x, 100x, 200x, and 400x magnifications yielded higher R² scores for almost all genes than single-magnification models. For the classification task, DenseNet121 achieved an F1 score, precision, recall, and accuracy of 0.98, with a specificity of 0.99.
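The regression metrics reported above (PCC, MAE, RMSE, R²) can be computed per gene from predicted versus measured expression values; the sketch below shows one way to do so with SciPy and scikit-learn. The array names y_true and y_pred are illustrative placeholders, not variables from the study.

```python
# Minimal sketch of the per-gene regression metrics reported in Results.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Return PCC, MAE, RMSE, and R² for one gene's predictions."""
    pcc, _ = pearsonr(y_true, y_pred)                      # Pearson correlation coefficient
    mae = mean_absolute_error(y_true, y_pred)              # mean absolute error
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))     # root mean square error
    r2 = r2_score(y_true, y_pred)                          # coefficient of determination
    return {"PCC": pcc, "MAE": mae, "RMSE": rmse, "R2": r2}
```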

Conclusions: RPEGENE-Net provides a simple, cost-effective method for predicting gene expression from live-cell images, with potential applications in experimental studies, RPE transplantation quality control, and broader cell-based research. Multi-magnification imaging enhances model performance, supporting the framework's utility as a scalable tool for diverse gene expression studies.




