Improving the Curvelet Saliency and Deep Convolutional Neural Networks for Diabetic Retinopathy Classification in Fundus Images

Vo Thi Hong Tuyet
Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam National University-Ho Chi Minh City, Ho Chi Minh City, Vietnam
vthtuyet.sdh19@hcmut.edu.vn

Nguyen Thanh Binh
Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam National University-Ho Chi Minh City, Ho Chi Minh City, Vietnam
ntbinh@hcmut.edu.vn (corresponding author)

Dang Thanh Tin
Information Systems Engineering Laboratory, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam National University-Ho Chi Minh City, Ho Chi Minh City, Vietnam
dttin@hcmut.edu.vn

Abstract-Retinal vessel images reveal a wide range of abnormal pixels in patients, so classifying diseases from fundus images is a popular approach. This paper proposes a new method to classify diabetic retinopathy in retinal blood vessel images based on curvelet saliency for segmentation. Our approach consists of three stages: pre-processing to improve the quality of the input images, calculating a saliency map based on curvelet coefficients, and classification with VGG16. To evaluate the proposed method, the STARE and HRF datasets were used for testing with the Jaccard Index. The accuracy of the proposed method is about 98.42% and 97.96% on the STARE and HRF datasets respectively.

Keywords-saliency; VGG16; classification; diabetic retinopathy; retinal blood vessel

I. INTRODUCTION

Diabetes is a condition that occurs when the pancreas does not produce enough insulin or when the body loses its ability to metabolize insulin. Manifestations of diabetic retinopathy include microaneurysms, intraretinal hemorrhage, hard exudates, macular edema, macular ischemia, neovascularization, vitreous hemorrhage, and traction retinal detachment. The fundus complication of diabetes is called diabetic retinopathy, and it damages the small blood vessels in the retina. The retina is the light-sensitive area of the eyeball, where nerve cells receive images and send them to the brain for processing; the macula is the region most important for the finest detail. Diabetic retinopathy is the leading cause of vision loss or blindness in developed countries. Therefore, identifying diabetic retinopathy in retinal images is very important. Recent advances in computer science, particularly in machine learning for detecting patterns in data, can provide solutions for detecting diabetic retinopathy through retinal imaging. Detecting bright lesions in the retinal blood vessels is difficult, but it can help doctors in diagnosis and treatment.

Retinopathy is widely researched [1-5] with many different methods, such as image saliency analysis [6, 7], machine learning techniques [8-10], wavelets with support vector machines [11], segmentation [12, 13], and morphology [27]. The authors in [10] proposed a method to identify diabetes from retinal images using the DiaNet model with a multi-stage Convolutional Neural Network (CNN).
The authors in [14] proposed a method for detecting fundus image lesions using a CNN combined with the shuffled frog leaping algorithm. The authors in [15] used a discrete image clustering technique to separate the foreground and the background of the input image in order to classify the lesion results. The authors in [16] proposed a method to segment premature infant retinal images and extract a map of the blood vessels. The authors in [17] suggested a contour detection method that computes the image gradient by applying Type-2 fuzzy rules to detect edges. The authors in [18] used a CNN model to design a learning method for grading fundus images; however, the system was validated on a small dataset. The authors in [19] proposed a method for diabetic retinopathy detection based on U-Net and ResNet-18 and assessed two segmentation images in their experiments. This method gives accurate results, but its complexity is high. The authors in [20] proposed a method for retina image recognition with two phases: finding features for healthy retinal image recognition, and using vascular and lesion-based features for diabetic retinopathy image recognition. Their experimental results were 100%, 96.67%, and 97.78% on the considered databases.

This paper proposes a method for diabetic retinopathy classification based on improving the second-generation wavelet transform (the curvelet transform) combined with the VGG16 CNN. To evaluate the results of the proposed method and compare them with the results of other known methods, two open datasets (STARE and HRF) were used for testing, and the Jaccard Index (JI) was utilized as the evaluation criterion. The accuracy of the proposed method is about 98.42% and 97.96% on the STARE and HRF datasets respectively. The main contributions of this study are:

• The proposal of a deep VGG16 model to classify diabetic retinopathy in fundus images.
• Increased classification accuracy through curvelet saliency combined with VGG16.

II. CLASSIFYING DIABETIC RETINOPATHY WITH CURVELET SALIENCY AND VGG16

The quality of retinal blood vessel images is affected by a wide range of factors, such as noise, motion blur, and overlapping structures. Diabetic classification based on feature extraction is a popular state-of-the-art approach. This section presents the proposed method for diabetic retinopathy classification with curvelet saliency and VGG16, which is shown in Figure 1.

Fig. 1. The proposed method for diabetes classification.

The proposed method can be divided into three stages according to the characteristics of each step: pre-processing, curvelet saliency, and classification. The input data of the system are retinal blood vessel images. The green color channel is chosen for processing, CLAHE redistributes the lightness values of the objects, and a top-hat transform for mask making creates the morphology for the next stages; these three steps form the pre-processing stage, whose aim is to enhance the quality of the blood vessels in the fundus images. Secondly, the curvelet coefficients are calculated with the curvelet transform using DB4 in the decomposition steps. This value is applied at each level of the top-hat transform to form the threshold for the saliency levels, and the output is the curvelet saliency map for the final stage. Finally, the VGG16 model classifies the images as diabetic or not.
A. Improving the Features of Retinal Blood Vessel Images

The aim of this step is to improve the features of the fundus images. The thickness of the blood vessels varies from 1 to 5 pixels. Quality enhancement starts from the similar color and the approximation of the surrounding pixels. Fundus images have three color channels (Red-Green-Blue): Red is the saturated channel, Green carries the lighting, and Blue holds the contrast of the vessels against the background. In our method, the Green channel is chosen because of the lighting needed for blood vessel detection. Then, CLAHE is performed with the following steps:

• Dividing the image into non-overlapping contextual areas of 8×8 pixel blocks.

• Calculating the average number of pixels per gray level as in (1):

N_{avg} = (N_{rX} \times N_{rY}) / N_{gray}   (1)

where N_{avg} is the average number of pixels, N_{gray} represents the number of gray levels of the divided areas, and N_{rX} and N_{rY} are the numbers of pixels in the X- and Y-dimension respectively.

• From the average number of pixels, their product with the clipped pixels is computed at the i-th gray level. These values form the histogram of the clipped pixels. The conditions for the histogram level setup are then compared against the average to keep the better value.

• Applying the condition and the probability density as in (2):

y(i) = y_{min} + \sqrt{2\alpha^2 \ln(1 / (1 - P(i)))}   (2)

where \alpha = 0.04 is the scaling parameter of the Rayleigh distribution, y(i) is the Rayleigh forward transform, P(i) is the cumulative probability of the i-th gray level, and y_{min} is the lower bound of the pixel value.

• Updating the artifacts with the average histogram and the probability density.

The final step of the pre-processing is the top-hat transform, which creates the mask making and the subtraction process. The proposed method divides the result into two levels (high and low) based on the distance between the gray levels of the pixels and the average histogram of the CLAHE step. The output of this step is the high-level and low-level input for the curvelet coefficients.
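As an illustration of this pre-processing stage, the following minimal Python sketch extracts the green channel, applies CLAHE over 8×8 tiles, and computes a top-hat response with OpenCV. The clip limit and the structuring-element size are illustrative assumptions, not values reported by the authors:

import cv2

def preprocess_fundus(path):
    # Load the fundus image (OpenCV reads BGR) and keep the green channel,
    # which carries the lighting needed for vessel detection.
    bgr = cv2.imread(path)
    green = bgr[:, :, 1]
    # CLAHE over 8x8 contextual areas as in the paper; the clip limit
    # is an assumed value.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    # Top-hat transform to emphasize thin bright structures; the
    # structuring-element size is also an assumption.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    tophat = cv2.morphologyEx(enhanced, cv2.MORPH_TOPHAT, kernel)
    return enhanced, tophat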
B. Curvelet Coefficients for Choosing the Saliency Map

The input of this step is the high and the low level from the previous step. At each level, this approach uses the stages shown in Figure 2, where the decomposition is done with DB4 for division and subband creation.

Fig. 2. The curvelet coefficients for choosing saliency.

The subband w_j with block size b_j at each level of a fundus image f is denoted as \Delta_j f. For an appropriate scale of sidelength \sim 2^{-s}, Q is a collection of smooth windows localized around the dyadic squares of the fundus image as in (3):

Q = [k_1/2^s, (k_1+1)/2^s] \times [k_2/2^s, (k_2+1)/2^s]   (3)

Then, the curvelet coefficients are calculated as follows:

• Renormalizing at each unit scale.
• Analyzing via the discrete Ridgelet transform at each unit scale.
• Updating the subbands of the blocks with the double value if their number is odd; if the block number is even, no update is applied.
• In the reversion, the curvelet domain with coefficients K depends on the number of scales in the list of subbands.

The saliency level R(x) at pixel x is presented with level K. The curvelet saliency for the final saliency prediction is computed with the algorithm described below:

Input: R(x) and K
Output: the saliency prediction
Function cal_curveletSaliency(R(x), K)
    superpixel centers S_i = the average pixel i in the grid
    while iteration t from 1 to v do:
        the association between pixel p and superpixel i is:
            Q(p, i) = exp(-||p - S_i||^2 / (2σ^2))
        superpixel centers = the association-weighted sum in K
    end while
    return distance(pixel x, superpixel centers)
End function

The final output of this stage is the saliency map of the blood vessel areas in the fundus images. The curvelet coefficient K is the condition for the association of the superpixel centers in a map.

C. VGG16 in the Curvelet Saliency Map for Diabetes Classification

The stage after the feature extraction of the image segmentation from the curvelet saliency is the classification by VGG16, with the architecture shown in Figure 3.

Fig. 3. Applying VGG16 for feature extraction in the curvelet saliency map.

In this architecture, each level of the curvelet saliency passes through the following blocks:

• Block 1: 2 convolution + 1 pooling
• Block 2: 2 convolution + 1 pooling
• Block 3: 3 convolution + 1 pooling
• Block 4: 3 convolution + 1 pooling
• Block 5: 3 convolution + 1 pooling
• Block 6: Dense with 3 fully connected layers + ReLU
• Softmax at the end.

At first, each input image is processed with 3×3 convolution channels. The channel of the second block is 64×64, and the size is doubled for the following blocks; therefore, the channels of blocks 3, 4, and 5 are 128×128, 256×256, and 512×512. The padding is 2 pixels for the 3×3 convolution layers; this is the smallest size that preserves the notion of left/right, up/down, and center. The stride of the convolution layers is also 2 pixels with the above padding. The pooling layers (max pooling) have a window size of 2×2 pixels and a stride of 2 pixels. In block 6, the first and second fully connected layers have 1×1×4096 channels, while the final fully connected layer has 1×1×1000 channels. The softmax (ranging from 0 to 1) gives the final classification as diabetes or non-diabetes at each level of the curvelet saliency. The differences between the VGG16 used in this paper and the traditional VGG16 are the stride and the multi-model applied at each level of the curvelet saliency. The synthesis of the multi-model (the multiple softmax results) is the max value.
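For concreteness, the following minimal Keras sketch builds a standard VGG16-style classifier with a two-class softmax head. It uses the stock tf.keras.applications.VGG16 backbone with its default strides; the paper's 2-pixel stride modification is not reproduced here, and the input size, optimizer, and randomly initialized weights are assumptions:

import tensorflow as tf

def build_vgg16_classifier(input_shape=(224, 224, 3)):
    # Blocks 1-5: the standard VGG16 convolutional backbone
    # (2-2-3-3-3 convolutions with max pooling), randomly initialized.
    backbone = tf.keras.applications.VGG16(include_top=False, weights=None,
                                           input_shape=input_shape)
    # Block 6: fully connected layers with ReLU, then a softmax over
    # the two classes (diabetes / non-diabetes).
    x = tf.keras.layers.Flatten()(backbone.output)
    x = tf.keras.layers.Dense(4096, activation="relu")(x)
    x = tf.keras.layers.Dense(4096, activation="relu")(x)
    outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

In the paper's multi-model scheme, one such network would be trained per curvelet saliency level, with the maximum of the softmax outputs taken as the final synthesis.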
III. EXPERIMENTATION AND EVALUATION RESULTS

A. Datasets Used

The proposed method was evaluated on the STARE [21] and HRF (High-Resolution Fundus) [22] datasets, which contain images of normal and diseased retinal blood vessels. Both datasets are public and free to use for research and academic purposes. During the experiments, 70% of the STARE dataset was used for training and 30% for testing, while the HRF dataset was used entirely (100%) for testing.

The STARE dataset [21] consists of 402 images (91 diabetes and 311 non-diabetes). Their size is 605×700 pixels with 24 bits per pixel (standard RGB). A wide range of diseases can be diagnosed from the retinal blood vessels in this dataset; in this paper, the disease of interest is diabetic retinopathy. The HRF dataset [22] includes 45 images (15 diabetic retinopathy images and 30 non-diabetes images from healthy and glaucomatous patients). The proposed method was developed in Python 3.9, and the configuration of the test system was a 2.7 GHz Quad-Core Intel i7 processor with 16 GB of 2133 MHz LPDDR3 RAM.

B. Evaluation Metric and Experimental Results

The main idea of the proposed method is the segmentation with curvelet saliency. Therefore, the evaluation uses the JI value. If A is the image segmentation and B is the ground truth image, JI(A, B) is calculated by (4); the higher the JI value, the better:

JI(A, B) = |A \cap B| / |A \cup B|   (4)
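For reference, (4) can be computed on binary segmentation masks with a few lines of NumPy; the function and argument names are illustrative, and the empty-union convention is an assumption:

import numpy as np

def jaccard_index(segmentation, ground_truth):
    # JI(A, B) = |A intersect B| / |A union B| for two binary masks.
    a = np.asarray(segmentation, dtype=bool)
    b = np.asarray(ground_truth, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return float(np.logical_and(a, b).sum()) / float(union)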
To define the curvelet coefficients that adapt to the salient map better than others, Table I compares saliency segmentation based on superpixels and on other coefficients. It can be seen that the proposed curvelet coefficients give segmentation results near the ground truth of the retinal blood vessels in the HRF dataset. Table II compares the proposed method with matched filtering and fuzzy C-means clustering with an integrated level set [23] and fully convolutional deep learning [24]; the average JI of the proposed method is better than the others. This experiment was carried out on the 45 images of HRF. The ground truth of each fundus image is given clearly in the dataset, so the JI values are easy to compare. Figure 4 presents some segmentation results of the curvelet saliency in the HRF dataset and compares the proposed method with [23] and [24].

TABLE I. AVERAGE JI FOR SALIENCY SEGMENTATION BASED ON DIFFERENT COEFFICIENTS

Method                                                      Average JI
Saliency based on superpixels                               85.10
Saliency based on Gaussian filter                           90.42
Saliency based on contourlet coefficients                   91.37
Proposed method (saliency based on curvelet coefficients)   98.25

Fig. 4. Segmentation result comparison in the HRF dataset. For each of the four example images (two diabetes, two non-diabetes), the figure shows the fundus image from the HRF dataset, the result of [23], the result of [24], the result of the proposed method, and the ground truth, with the following JI values:

Image                   [23]     [24]     Proposed
Diabetes image 1        91.70    95.34    98.62
Diabetes image 2        90.55    95.84    98.37
Non-diabetes image 1    92.72    94.15    97.89
Non-diabetes image 2    93.70    96.81    99.04

TABLE II. AVERAGE JI OF THE PROPOSED METHOD AND OF OTHER SEGMENTATION METHODS

Method     Average JI
[23]       92.03
[24]       96.13
Proposed   98.25

From Tables I-II and Figure 4, the curvelet saliency adapts well to segmentation for diabetic diseases. The next evaluation fed the curvelet saliency as input to other deep learning models for diabetes classification. Figure 5 shows the comparison chart of the results: VGG16 performs better than the other known methods in terms of classification accuracy for diabetic retinopathy with curvelet coefficients. The comparison was carried out between the proposed method and a CNN, a Fully Convolutional Network (FCN), and U-Net combined with VGG16. On both datasets, the accuracy of VGG16 is higher for classifying diabetic retinopathy. Table III shows the classification results in detecting diabetes and non-diabetes images using deep CNN VGG-16 and GoogLeNet [25], ResNet50 and VGG-16 pretrained networks [26], and VGG16 in curvelet saliency.

Fig. 5. Classification results for diabetic retinopathy of deep learning models based on curvelet saliency.

TABLE III. DIABETIC RETINOPATHY CLASSIFICATION RESULTS OF THE VGG16 COMPARISON (ACCURACY, %)

Dataset   Deep CNN VGG-16 and GoogLeNet [25]   ResNet50 and VGG-16 pretrained networks [26]   VGG16 in curvelet saliency
STARE     94.36                                 96.01                                          98.42
HRF       94.10                                 95.74                                          97.96

The curvelet coefficients are the main reason for the better results, because they enhance the quality and divide the saliency levels. The curves in the fundus images adapt well to the curvelet transform, and these values form the condition for choosing the saliency maps; therefore, the segmentation of the fundus images is better. On the other hand, the multi-level curvelet coefficients for saliency feed the input of VGG16 with the changed stride and the synthesis of the multi-model VGG16. As a result, the classification results are improved. The authors of other works applied deep learning for classification based on the number of layers or by improving the parameters for the dataset. However, the input parameters are not easy to enhance, and focusing only on surface information is not enough. The proposed method offers multi-level processing with curvelet coefficients for the salient map, so the levels of fundus image quality are shown clearly and the deep VGG16 at each saliency level calculates a value for disease classification.

IV. CONCLUSION AND FUTURE WORK

Diabetes can reduce the red blood cell rate, increase the ability of platelets to agglomerate, and increase blood viscosity. As a result, the capillaries become clogged, causing retinal ischemia. Diagnosis in medicine is a vital task, and any disease prediction or classification system must adapt to the medical images. Retinal vessel images reveal a wide range of abnormal pixels; therefore, classifying diseases from fundus images is a popular research topic. This paper proposed a new method for classifying diabetic retinopathy in retinal blood vessel images based on curvelet saliency for segmentation. Our approach includes three steps: pre-processing of the input images, calculating the saliency map based on curvelet coefficients, and utilizing VGG16 for classification. The choice of the Green color channel, combined with quality enhancement and division of the saliency levels by the curve condition of the curvelet transform, gave the best segmentation results. The stride and the multi-model of the VGG16 were proposed for each saliency level to enhance the results. In future work, configuring and updating the number of layers or blocks in the deep learning models will be considered.

ACKNOWLEDGEMENT

This research is funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number B2019-20-05. We acknowledge the provision of facilities by Ho Chi Minh City University of Technology (HCMUT) and VNU-HCM.

REFERENCES

[1] Y. Jiang, N. Tan, T. Peng, and H. Zhang, "Retinal Vessels Segmentation Based on Dilated Multi-Scale Convolutional Neural Network," IEEE Access, vol. 7, pp. 76342–76352, 2019, https://doi.org/10.1109/ACCESS.2019.2922365.
[2] S. Joshi and P. T. Karule, "A review on exudates detection methods for diabetic retinopathy," Biomedicine & Pharmacotherapy, vol. 97, pp. 1454–1460, Jan. 2018, https://doi.org/10.1016/j.biopha.2017.11.009.
[3] S. Kumar and B. Kumar, "Diabetic Retinopathy Detection by Extracting Area and Number of Microaneurysm from Colour Fundus Image," in 5th International Conference on Signal Processing and Integrated Networks, Noida, India, Feb. 2018, pp. 359–364, https://doi.org/10.1109/SPIN.2018.8474264.
[4] H. A. G. Priya, J. Anitha, D. E. Popescu, A. Asokan, D. J. Hemanth, and L. H. Son, "Detection and Grading of Diabetic Retinopathy in Retinal Images Using Deep Intelligent Systems: A Comprehensive Review," Computers, Materials & Continua, vol. 66, no. 3, pp. 2771–2786, 2021, https://doi.org/10.32604/cmc.2021.012907.
[5] C. Bhardwaj, S. Jain, and M. Sood, "Diabetic Retinopathy Lesion Discriminative Diagnostic System for Retinal Fundus Images," Advanced Biomedical Engineering, vol. 9, pp. 71–82, 2020, https://doi.org/10.14326/abe.9.71.
[6] N. T. Binh, V. T. H. Tuyet, N. M. Hien, and N. T. Thuy, "Retinal Vessels Segmentation by Improving Salient Region Combined with Sobel Operator Condition," in 6th International Conference on Future Data and Security Engineering, Nha Trang City, Vietnam, Nov. 2019, pp. 608–617, https://doi.org/10.1007/978-3-030-35653-8_39.
[7] Q. Yan et al., "Automated retinal lesion detection via image saliency analysis," Medical Physics, vol. 46, no. 10, pp. 4531–4544, 2019, https://doi.org/10.1002/mp.13746.
[8] N. S. Murthy and B. Arunadevi, "An effective technique for diabetic retinopathy using hybrid machine learning technique," Statistical Methods in Medical Research, vol. 30, no. 4, pp. 1042–1056, Dec. 2021, https://doi.org/10.1177/0962280220983541.
[9] A. A. Abdulsahib, M. A. Mahmoud, M. A. Mohammed, H. H. Rasheed, S. A. Mostafa, and M. S. Maashi, "Comprehensive review of retinal blood vessel segmentation and classification techniques: intelligent solutions for green computing in medical images, current challenges, open issues, and knowledge gaps in fundus medical images," Network Modeling Analysis in Health Informatics and Bioinformatics, vol. 10, no. 1, Nov. 2021, Art. no. 20, https://doi.org/10.1007/s13721-021-00294-7.
[10] M. T. Islam, H. R. H. Al-Absi, E. A. Ruagh, and T. Alam, "DiaNet: A Deep Learning Based Architecture to Diagnose Diabetes Using Retinal Images Only," IEEE Access, vol. 9, pp. 15686–15695, 2021, https://doi.org/10.1109/ACCESS.2021.3052477.
[11] H. A. Owida, A. Al-Ghraibah, and M. Altayeb, "Classification of Chest X-Ray Images using Wavelet and MFCC Features and Support Vector Machine Classifier," Engineering, Technology & Applied Science Research, vol. 11, no. 4, pp. 7296–7301, Aug. 2021, https://doi.org/10.48084/etasr.4123.
[12] N. Memari, A. R. Ramli, M. I. B. Saripan, S. Mashohor, and M. Moghbel, "Retinal Blood Vessel Segmentation by Using Matched Filtering and Fuzzy C-means Clustering with Integrated Level Set Method for Diabetic Retinopathy Assessment," Journal of Medical and Biological Engineering, vol. 39, no. 5, pp. 713–731, Jul. 2019, https://doi.org/10.1007/s40846-018-0454-2.
[13] B. Gharnali and S. Alipour, "MRI Image Segmentation Using Conditional Spatial FCM Based on Kernel-Induced Distance Measure," Engineering, Technology & Applied Science Research, vol. 8, no. 3, pp. 2985–2990, Jun. 2018, https://doi.org/10.48084/etasr.1999.
[14] W. Ding, Y. Sun, L. Ren, H. Ju, Z. Feng, and M. Li, "Multiple Lesions Detection of Fundus Images Based on Convolution Neural Network Algorithm With Improved SFLA," IEEE Access, vol. 8, pp. 97618–97631, 2020, https://doi.org/10.1109/ACCESS.2020.2996569.
[15] Y. Wang and S. Shan, "Accurate disease detection quantification of iris based retinal images using random implication image classifier technique," Microprocessors and Microsystems, vol. 80, Oct. 2021, Art. no. 103350, https://doi.org/10.1016/j.micpro.2020.103350.
[16] A. Krestanova, J. Kubicek, M. Penhaker, and J. Timkovic, "Premature infant blood vessel segmentation of retinal images based on hybrid method for the determination of tortuosity," Lékař a technika - Clinician and Technology, vol. 50, no. 2, pp. 49–57, Jun. 2020, https://doi.org/10.14311/CTJ.2020.2.02.
[17] F. Orujov, R. Maskeliunas, R. Damasevicius, and W. Wei, "Fuzzy based image edge detection algorithm for blood vessel detection in retinal images," Applied Soft Computing, vol. 94, Jun. 2020, Art. no. 106452, https://doi.org/10.1016/j.asoc.2020.106452.
[18] D. Maji and A. A. Sekh, "Automatic Grading of Retinal Blood Vessel in Deep Retinal Image Diagnosis," Journal of Medical Systems, vol. 44, no. 10, Jun. 2020, Art. no. 180, https://doi.org/10.1007/s10916-020-01635-1.
[19] K. Oh, H. M. Kang, D. Leem, H. Lee, K. Y. Seo, and S. Yoon, "Early detection of diabetic retinopathy based on deep learning and ultra-wide-field fundus images," Scientific Reports, vol. 11, no. 1, Jan. 2021, Art. no. 1897, https://doi.org/10.1038/s41598-021-81539-3.
[20] O. Noah Akande, O. Christiana Abikoye, A. Anthonia Kayode, and Y. Lamari, "Implementation of a Framework for Healthy and Diabetic Retinopathy Retinal Image Recognition," Scientifica, vol. 2020, May 2020, Art. no. e4972527, https://doi.org/10.1155/2020/4972527.
[21] A. D. Hoover, V. Kouznetsova, and M. Goldbaum, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response," IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, Mar. 2000, https://doi.org/10.1109/42.845178.
[22] A. Budai, R. Bock, A. Maier, J. Hornegger, and G. Michelson, "Robust Vessel Segmentation in Fundus Images," International Journal of Biomedical Imaging, vol. 2013, Dec. 2013, Art. no. e154860, https://doi.org/10.1155/2013/154860.
[23] N. Memari, A. R. Ramli, M. I. B. Saripan, S. Mashohor, and M. Moghbel, "Retinal Blood Vessel Segmentation by Using Matched Filtering and Fuzzy C-means Clustering with Integrated Level Set Method for Diabetic Retinopathy Assessment," Journal of Medical and Biological Engineering, vol. 39, no. 5, pp. 713–731, Jul. 2019, https://doi.org/10.1007/s40846-018-0454-2.
[24] I. Atli and O. S. Gedik, "Sine-Net: A fully convolutional deep learning architecture for retinal blood vessel segmentation," Engineering Science and Technology, an International Journal, vol. 24, no. 2, pp. 271–283, Dec. 2021, https://doi.org/10.1016/j.jestch.2020.07.008.
[25] C. Suedumrong, K. Leksakul, P. Wattana, and P. Chaopaisarn, "Application of Deep Convolutional Neural Networks VGG-16 and GoogLeNet for Level Diabetic Retinopathy Detection," in Proceedings of the Future Technologies Conference, Vancouver, BC, Canada, Nov. 2021, pp. 56–65, https://doi.org/10.1007/978-3-030-89880-9_5.
[26] M. Aatila, M. Lachgar, H. Hrimech, and A. Kartit, "Diabetic Retinopathy Classification Using ResNet50 and VGG-16 Pretrained Networks," International Journal of Computer Engineering and Data Science (IJCEDS), vol. 1, no. 1, pp. 1–7, Jul. 2021.
[27] S. Xefteris, K. Tserpes, and T. Varvarigou, "A Method for Improving Renogram Production and Detection of Renal Pelvis using Mathematical Morphology on Scintigraphic Images," Engineering, Technology & Applied Science Research, vol. 2, no. 4, pp. 251–258, Aug. 2012, https://doi.org/10.48084/etasr.206.