Engineering, Technology & Applied Science Research, Vol. 10, No. 4, 2020, 5986-5991, www.etasr.com

A Machine Learning based Approach for Segmenting Retinal Nerve Images using Artificial Neural Networks

Alotaibi Najm Saeed
Prince Saud Al Faisal Institute for Diplomatic Studies, Saudi Arabia
alotaibinajim@gmail.com

Abstract—Artificial Intelligence (AI) based Machine Learning (ML) is gaining increasing attention from researchers. In ophthalmology, ML has been applied to fundus photographs, achieving robust classification performance in the detection of diseases such as diabetic retinopathy and retinopathy of prematurity. The detection and extraction of blood vessels in the retina is an essential part of diagnosing various eye-related problems, such as diabetic retinopathy. This paper proposes a novel machine learning approach to segment the retinal blood vessels from eye fundus images using a combination of color features, texture features, and Back Propagation Neural Networks (BPNN). The proposed method comprises two steps: color texture feature extraction, and training the BPNN to obtain the segmented retinal nerves. Magenta color and correlation texture features are given as input to the BPNN. The system was trained and tested on retinal fundus images taken from two distinct databases. The average sensitivity, specificity, and accuracy obtained for the segmentation of retinal blood vessels were 0.470, 0.914, and 0.903 respectively. The results reveal that the proposed methodology performs well in the automated segmentation of retinal nerves and obtains accuracy comparable to that of other methods.

Keywords—machine learning; texture feature; retinal nerves; segmentation; neural networks; feature extraction

I.
INTRODUCTION

Computer-assisted diagnosis of retinal fundus images is becoming an alternative to the manual inspection known as direct ophthalmoscopy. Moreover, computer-assisted diagnosis of retinal fundus images has proven to be as reliable as direct ophthalmoscopy, while requiring less time to process and analyze. Various eye-related pathologies that can result in blindness, such as macular degeneration and diabetic retinopathy, are routinely diagnosed by utilizing retinal fundus images [1]. One of the fundamental steps in diagnosing diabetic retinopathy is the extraction of retinal blood vessels from fundus images. Although several segmentation methods [2, 3] have been proposed, this segmentation remains challenging due to variations in the retinal vasculature and in image quality. Currently, the main challenges in retinal vessel segmentation are noise (often due to uneven illumination) and thin vessels. Furthermore, the majority of the proposed segmentation methods optimize the preprocessing and vessel segmentation parameters separately for each dataset. Hence, these approaches can often achieve high accuracy on the optimized dataset, whereas their accuracy on other datasets is reduced. Although vessel segmentation methods usually contain preprocessing steps aimed at enhancing the appearance of vessels, some approaches skip preprocessing and start directly with segmentation. Nowadays, many segmentation methods rely on machine learning [4] concepts combined with traditional segmentation techniques to enhance the segmentation accuracy, by providing statistical data analysis to support the segmentation algorithms. These machine learning concepts can be broadly categorized into unsupervised and supervised approaches, based on the use of labeled training data. In a supervised approach, each pixel in the image is labeled and assigned to a class, i.e. vessel or non-vessel, by a human operator.
A series of feature vectors is generated from the data being processed (pixel-wise features in image segmentation problems) and a classifier is trained using the labels assigned to the data. Unsupervised approaches use predefined feature vectors without any class labels, where similar samples are gathered into distinct classes. This clustering is based on some assumptions about the structure of the input data, e.g. two classes of input data where the feature vectors within each class are similar to each other (vessel and non-vessel). Depending on the problem, this similarity metric can be complex or defined by something as simple as pixel intensities. This paper briefly discusses retinal vessel segmentation methods to provide some insight into the different approaches, and is by no means an exhaustive review. For a detailed discussion of different vessel segmentation methods please refer to [5]. Authors in [6] proposed a supervised retinal vessel segmentation where a k-Nearest Neighbor (k-NN) classifier was utilized for identifying vessel and non-vessel pixels, using a feature vector based on a multi-scale Gaussian filter. Authors in [7] proposed a similar approach that utilized a feature vector constructed using a ridge detector. Based on a feature vector constructed using multi-scale Gabor wavelet filters, authors in [8] proposed the use of a Bayesian classifier for segmenting vessel and non-vessel pixels. In [9], a neural network (NN) based classifier was proposed, utilizing moment-invariant features. Retinal vessel segmentation using a classifier based on boosted decision trees was proposed in [10].
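As a minimal illustration of this supervised pixel-classification scheme, a k-NN vessel/non-vessel classifier might look like the sketch below. The data is entirely synthetic, and the four features merely stand in for pixel-wise filter responses such as multi-scale Gaussian outputs; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for pixel-wise feature vectors (e.g. filter responses)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # 200 pixels, 4 features each
y = (X[:, 0] > 0).astype(int)        # human-assigned labels: 1 = vessel

# Train a k-NN classifier on the labeled pixels, then label unseen pixels
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
pred = clf.predict(rng.normal(size=(50, 4)))
```

The same skeleton applies to the other supervised methods cited above; only the feature construction and the classifier change.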
Meanwhile, authors in [11] proposed a classifier utilizing a support vector machine coupled with features derived using a rotation-invariant line operator. The main advantage of using an unsupervised segmentation approach instead of a supervised one is its independence from labeled training data. This is an important aspect in medical imaging and related applications, which often involve large amounts of data. Popular unsupervised retinal vessel segmentation methods can be categorized as vessel tracking, matched filtering, and morphology-based methods. Starting from a set of initial points, defined either manually or automatically, vessel-tracking methods try to segment the vessels by tracking their centerlines. This tracking can be performed by utilizing different vessel estimation profiles, such as Gaussian [12], generic parametric [13], Bayesian probabilistic [14], and multi-scale profiles [15]. One of the earliest examples of vessel tracking based segmentation methods was proposed in [16], based on the Maximum A Posteriori (MAP) technique. Initial seeding positions corresponding to the centerline and vessel edges were determined using statistical analysis of intensity and vessel continuity properties. Afterwards, vessel boundaries were estimated by applying a Gaussian curve fitting function to the vessel cross-section intensity profile. Authors in [17] proposed a similar approach combining the MAP technique with a multi-scale line detection algorithm, and their method was able to handle vessel tree branching and crossover points with good performance. Based on the notion that the vessel profile can be modeled by a kernel (structuring element), filtering-based methods try to model and segment the vessels by convolving the retinal image with a 2D rotating template.
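The rotating-template idea can be sketched as below. This is a generic matched-filter illustration assuming a dark vessel with a Gaussian cross-section; the kernel size, σ, and number of orientations are arbitrary choices, not values from any cited method, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def matched_filter_response(img, sigma=1.5, size=9, n_angles=12):
    """Maximum response over rotated Gaussian line templates.

    Vessels are modeled as dark lines with a Gaussian cross-section, so
    the base kernel is a vertical line whose profile is an inverted Gaussian.
    """
    x = np.arange(size) - size // 2
    profile = -np.exp(-x**2 / (2 * sigma**2))   # dark-vessel cross-section
    kernel = np.tile(profile, (size, 1))        # vertical line template
    kernel -= kernel.mean()                     # zero-mean matched filter
    best = np.full(img.shape, -np.inf)
    for k in range(n_angles):                   # rotate template, keep max
        rk = rotate(kernel, angle=180.0 * k / n_angles, reshape=False)
        best = np.maximum(best, convolve(img.astype(float), rk))
    return best

# A dark vertical line on a bright background responds strongly on the line
img = np.ones((21, 21))
img[:, 10] = 0.0
resp = matched_filter_response(img)
```

The response is highest where a rotated copy of the template fits the local vessel orientation, which is exactly the behavior described in the next paragraph.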
A rotating template is used to approximate the vessel profile in as many orientations as possible (known as the filter response), with the response being highest where the vessels fit the kernel. Techniques based on filtering utilize different kernels for modeling and enhancing retinal vessels, such as matched filters [18], Gaussian filters [19], wavelet filters [20, 21], Gabor filters [8], and COSFIRE filters [22, 23]. Methods utilizing morphological operations can be used either to enhance retinal images for use with other segmentation methods, or to segment blood vessels from the background [24]. Machine learning algorithms are often utilized as supportive tools to automate and/or enhance segmentation methods, by providing statistical analysis on a set of data generated by other segmentation methods. Therefore, any existing unsupervised segmentation algorithm could be enhanced by integrating machine learning concepts. Complex segmentation tasks are usually solved by a whole pipeline of several segmentation algorithms belonging to various image processing concepts. In this study, an automated vessel extraction method for retinal fundus images is proposed, based on a hybrid technique combining color and texture features with a Back Propagation Neural Network, evaluated on real-world clinical data. Furthermore, a combination of filters is utilized to enhance the segmentation, as each filter responds in a distinct way to different pixels in the image. Considering the different image characteristics between datasets, combining filters makes the segmentation approach more robust. As the aim of this study was to propose a segmentation method suitable for various datasets, the method was not optimized for any specific dataset.

II. PROPOSED METHODOLOGY

This section describes the overall proposed process for segmenting the nerves, or blood vessels, in retinal images.
Retinal images were used as testing datasets. Figure 1 shows the overall architecture of the proposed methodology for segmenting the retinal nerves.

Fig. 1. Overall architecture of the proposed methodology.

A. Dataset

Retinal images were taken from the publicly accessible DRIVE [4] and STARE [25] databases for the nerve segmentation process. These datasets are commonly used for developing and testing the performance of retinal segmentation methods. The manually drawn segmentations provided with them are called the ground truth.

B. Grey Scale Conversion and Pre-processing

Pre-processing is the process of removing the noise and artifacts present in images. Contrast limited adaptive histogram equalization is performed in order to equalize the entire image. A mean filter is utilized to reduce the noise and artifacts present in the input retinal images. Afterward, the retinal images are converted to grey scale for further processing. Since the grey scale converted image has irregular boundaries, pixels outside the image or the nerve boundaries were taken into account so that nerves at the boundary are not missed. The original, equalized, grey scale, and pre-processed retinal images are depicted in Figure 2.

C. Texture Feature Extraction

Texture provides important information about the structural arrangement of surfaces. Texture features are used for classifying the possible nerve regions identified in the previous processing steps [26, 27]. Gray Level Co-occurrence Matrix (GLCM) features were calculated for the regions present inside the retinal image. In general, the GLCM is created by counting how often a pixel with grey-level (greyscale intensity) value i occurs horizontally adjacent to a pixel with value j. Each element
(i, j) in the GLCM specifies the number of times that a pixel with value i occurs horizontally adjacent to a pixel with value j. The correlation features obtained are shown in Table I. In this method, color and texture features are considered as the input to train the BPNN.

Fig. 2. (a) Original, (b) equalized, (c) grey scale, and (d) pre-processed retinal images.

D. Correlation

Correlation is a measure of how a particular pixel correlates to its neighboring pixels. The correlation Cr of an image can be calculated by the following equation:

Cr = Σ_{i,j} [(i − μi)(j − μj) P(i, j)] / (σi σj)    (1)

where P(i, j) is the normalized GLCM entry and μi, μj, σi, σj are the means and standard deviations of its marginal (row and column) distributions.

TABLE I. CORRELATION FEATURE VALUES FOR DIFFERENT RETINAL IMAGES

21.4424  24.5872  22.6780  13.6716  22.2245
24.4419  21.4232  21.9096  21.1245  22.7387
22.3906  21.0447  21.5332  24.3650  24.6506
24.4419  21.4232  21.9096  21.1245  23.2861
23.3021  23.9773  22.9697  25.1126  21.8625

E. Color Feature Extraction

The RGB color format is widely used for processing digital images. Its main drawback is that it is not perceptually uniform for all images.

Fig. 3. Color formation process.

The Hue, Saturation, Value (HSV) representation of the RGB color space is compatible with human color perception. In this method, in order to obtain the color features, histograms of a square window centered around each pixel on an equidistant grid were calculated in each plane of the image, using both the LAB and HSV color spaces. A 5×5 window was used for extracting the mean histograms in both color spaces. The color formation process is shown in Figure 3.

F. Constructing Feature Vectors: Color Texture using Neighborhood Statistics

Gabor filters are often used for extracting texture features in order to segment images. However, Gabor filters have a major drawback: they introduce considerable redundancy, generating an enormous number of feature channels.
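The GLCM construction and the correlation feature of (1) can be sketched in plain NumPy. This is a minimal illustration of the textbook definition, not the paper's exact implementation:

```python
import numpy as np

def glcm_correlation(img, levels=256):
    """Correlation feature of the horizontal (offset (0, 1)) GLCM."""
    glcm = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1                      # count horizontal co-occurrences
    p = glcm / glcm.sum()                    # normalize to probabilities
    idx = np.arange(levels)
    mu_i = (p.sum(axis=1) * idx).sum()       # marginal means
    mu_j = (p.sum(axis=0) * idx).sum()
    sd_i = np.sqrt((p.sum(axis=1) * (idx - mu_i) ** 2).sum())
    sd_j = np.sqrt((p.sum(axis=0) * (idx - mu_j) ** 2).sum())
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    return ((ii - mu_i) * (jj - mu_j) * p).sum() / (sd_i * sd_j)

# Perfectly alternating pixels give the extreme anti-correlation of -1
checker = np.array([[0, 1, 0, 1], [1, 0, 1, 0]])
```

Libraries such as scikit-image provide the same feature (along with other Haralick features) out of the box.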
This method proposes a new color and texture feature extraction using higher-order image statistics, defining the texture regularity of the whole image within its neighborhood structures more effectively. An unsupervised learning process was used for recovering image statistics, as in [9]. The whole image is considered as a random field X on a set of lattice points S, where s ∈ S indexes the pixels of the image. For extracting this feature, an unsupervised adaptive filter is also used. It improves the probability of the pixel intensities by decreasing the conditional entropy h(X|Y = y) of the conditional probability for each neighborhood pair of pixels (X = x, Y = y). This is done by changing the value of each center pixel x. In each iteration m, the following gradient is computed over the entire image region:

∂h(X|Y = y)/∂x_s    (2)

An image x^{m+1} is constructed using finite forward differences on the gradient descent, with intensities:

x_s^{m+1} = x_s^m − λ ∂h(X|Y = y)/∂x_s    (3)

with λ being the time step. The pixel updating process is stopped after a few iterations, when ||x^{m+1} − x^m||² < δ, where δ is a small threshold. The process of magenta color formation is executed on the entire image for extracting the color texture features [9]. The color formation process is shown in Figure 3.

G. Constructing Feature Vectors

The feature vector construction is performed by a mean weighted histogram process. Since color and texture play complementary roles in segmenting a color-textured image, this combination makes the final segmentation more accurate. If there are C channels and N feature histograms h_i^j, the channel-wise weighted mean histogram H̄_j can be computed as:

H̄_j = Σ_{i=1}^{N} w_i h_i^j    (4)

where w_i is the weight allocated to each histogram. The mean histogram is calculated through the channel-wise weighted mean histograms, H = {H̄_j} (j = 1…C).
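Equation (4) amounts to a channel-wise weighted average of the N feature histograms; a minimal sketch for one channel, with equal weights as the default:

```python
import numpy as np

def weighted_mean_histogram(hists, weights=None):
    """Channel-wise weighted mean of N feature histograms (one channel)."""
    hists = np.asarray(hists, dtype=float)          # shape (N, bins)
    n = len(hists)
    # Equal weights w_i = 1/N unless the caller supplies its own
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    return (w[:, None] * hists).sum(axis=0)
```

Applying this per channel and collecting the results yields the set H of mean histograms used as the feature vector.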
When all color texture features are extracted, the histograms are assumed to have inherent similarity and are weighted equally, i.e. w_i = 1/N. The features obtained from the feature vector construction process using color texture neighborhood statistics are given as input to the BPNN. The color texture feature was used in this method since it represents the color texture of an image more accurately.

III. COLOR TEXTURE BASED BPNN

The Color and Texture feature based Back Propagation Neural Network (CTBPNN) was trained with pixel values as shown in Figure 4. The NN has an input layer, a hidden layer, and an output layer. The initial seed point is denoted X1 and the final seed point X2. The pixel values, as color texture features, are given as input to the input layer. The framework of the CTBPNN is as follows. The hidden layer input H_in(j) is defined as:

H_in(j) = Σ_{i=1}^{n} ω_ij x_i + a_j    (5)

where x_i is the input feature, ω_ij is the weight between the neurons of the input and hidden layers, and a_j represents the threshold. The estimates of the CTBPNN are:

β̂ = argmin_β ( ||Y − Σ_{j=1}^{p} x_j β_j||² + λ Σ_{j=1}^{p} |β_j| )    (6)

where λ is a non-negative regularization parameter, x comprises the blood vessel width and tortuosity of the retinal images, Y is the average accuracy, and the β_j are the regression coefficients. The number of neurons in the hidden layer satisfies:

h ≤ n − 1    (7)

h ≤ √(n + m) + α    (8)

where n, h, and m are the numbers of neurons in the input, hidden, and output layers respectively, and α is a constant between 0 and 20. In this work, n was set to 80, m to 3, and h ranges from 20 to 32. Correlation features were separately calculated for the training and testing phases.
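A forward pass matching (5) can be sketched as follows. The layer sizes use the values reported above (n = 80, m = 3; h = 27 is an arbitrary pick from the stated 20–32 range), and the weights are random placeholders rather than trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, m = 80, 27, 3                         # input, hidden, output sizes

W1, b1 = rng.normal(size=(h, n)) * 0.1, np.zeros(h)   # input -> hidden
W2, b2 = rng.normal(size=(m, h)) * 0.1, np.zeros(m)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(W1 @ x + b1)           # H_in(j) = sum_i w_ij x_i + a_j
    return sigmoid(W2 @ hidden + b2)

y = forward(rng.normal(size=n))
```

Training by back propagation then adjusts W1, b1, W2, b2 by gradient descent on the output error; that step is omitted here.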
The results obtained from various other segmentation methods were compared with the proposed method.

Fig. 4. Proposed CTBPNN architecture.

IV. RESULTS AND DISCUSSION

In this research, different retinal images taken from two distinct databases were used. Metrics such as sensitivity, specificity, and accuracy were considered for the accuracy estimation process. Sensitivity represents the probability that the segmentation method will correctly identify vessel pixels. Specificity is the probability that the segmentation method will correctly identify non-vessel pixels. Accuracy represents the overall performance of a segmentation method. They can be computed using the following equations:

Sensitivity = TP / (TP + FN)    (9)

Specificity = TN / (TN + FP)    (10)

Accuracy = (TP + TN) / (TP + FN + TN + FP)    (11)

where True Positive (TP) denotes vessel pixels correctly segmented as vessel pixels, and True Negative (TN) denotes non-vessel pixels correctly segmented as non-vessel pixels. False Positive (FP) denotes non-vessel pixels segmented as vessel pixels, while False Negative (FN) denotes vessel pixels segmented as non-vessel pixels. The original, pre-processed, border detected, color formation, ground truth, and segmented retinal nerve images are shown in Figure 5. The segmented images are very close to the ground truth images. In this approach, images specified by ophthalmologists and doctors are considered as ground truth for the calculation of segmentation accuracy.

Fig. 5. Segmented retinal vessels (columns: original, equalized, pre-processed, color formation, and segmented result; rows: five sample images).

Table II depicts the performance comparison of the proposed method with the most recent segmentation methods for the STARE and DRIVE datasets. Methods are compared using sensitivity, specificity, and accuracy.
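The three metrics of (9)–(11) follow directly from the four confusion counts; a minimal sketch for binary vessel masks:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy of a binary vessel mask."""
    tp = np.sum((pred == 1) & (truth == 1))   # vessel pixels found
    tn = np.sum((pred == 0) & (truth == 0))   # background correctly rejected
    fp = np.sum((pred == 1) & (truth == 0))   # background marked as vessel
    fn = np.sum((pred == 0) & (truth == 1))   # vessel pixels missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```

In practice `pred` and `truth` are the flattened segmented mask and the ground-truth mask of one fundus image.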
The proposed method obtained sensitivity, specificity, and accuracy of 0.470, 0.914, and 0.903 respectively for different retinal images. Although the sensitivity and specificity of the proposed method are lower than those of the previous methods, partly because of the magenta color formation step, its accuracy remains comparable, and the slightly lower accuracy does not affect its practical usefulness. The comparison of runtimes between different retinal nerve segmentation methods is shown in Table III. The proposed method needed 24s on average to segment the retinal nerves, making it the fastest among the compared methods from the literature.

TABLE II. COMPARISON BETWEEN DIFFERENT RETINAL VESSEL SEGMENTATION METHODS USING THE DRIVE AND STARE DATASETS

                                                            DRIVE dataset            STARE dataset
Method                                                      Sens.   Spec.   Acc.     Sens.   Spec.   Acc.
Radial projection and semi-supervised method [25]           0.741   0.975   0.943    0.726   0.975   0.949
Gray-level and moment invariants-based features [9]         0.706   0.980   0.945    0.694   0.981   0.952
Centerline detection and morphological reconstruction [28]  0.734   0.976   0.945    0.699   0.973   0.944
Trainable COSFIRE filters [22]                              0.766   0.970   0.944    0.763   0.966   0.951
Bit planes and centerline detection [10]                    0.715   0.976   0.943    0.722   0.971   0.946
Gray-voting and Gaussian mixture model [29]                 0.736   0.972   0.942    0.777   0.955   0.936
Self-adaptive matched filter [30]                           0.528   0.959   0.929    0.514   0.939   0.916
Proposed                                                    0.470   0.914   0.903    0.447   0.919   0.911

TABLE III.
COMPARISON BETWEEN THE RUNTIME OF DIFFERENT RETINAL VESSEL SEGMENTATION METHODS

Method                                                Time per image (s)   Platform
Morphological Hessian based approach [31]             150                  MATLAB
Wavelets and edge location refinement [20]            220                  MATLAB
Trainable COSFIRE filters [22]                        118                  MATLAB
Gray-voting and Gaussian mixture model [29]           106                  MATLAB
Line tracking method [32]                             93                   MATLAB
Graph cut approach with retinex and local phase [33]  46                   MATLAB and C++
Self-adaptive matched filter [30]                     30                   MATLAB
Proposed                                              24                   MATLAB

V. CONCLUSION AND FUTURE WORK

This paper proposed a novel method for segmenting the nerves of retinal images using a combination of color and texture features with a BPNN. Various images taken from two databases were used for training and testing the NN. This approach improved the segmentation accuracy of nerves in retinal images by segmenting the exact nerve region present in them. Color and texture features were computed, and the obtained seed points were given as input for training and testing the proposed method. Compared with various retinal nerve segmentation methods in the literature, the proposed method performed well in fundus image vessel segmentation, with sensitivity, specificity, and accuracy of 0.470, 0.914, and 0.903 respectively for the DRIVE dataset, and 0.447, 0.919, and 0.911 respectively for the STARE dataset, while segmenting the nerve region in less time than the other methods. Future enhancements can include a novel method for detecting glaucoma or other abnormalities present in retinal images.

REFERENCES

[1] A. H. Asad and A. E. Hassanien, “Retinal Blood Vessels Segmentation Based on Bio-Inspired Algorithm,” in Applications of Intelligent Optimization in Biology and Medicine: Current Trends and Open Problems, A.-E. Hassanien, C. Grosan, and M. Fahmy Tolba, Eds. Cham: Springer International Publishing, 2016, pp. 181–215. [2] B. Gharnali and S.
Alipour, “MRI Image Segmentation Using Conditional Spatial FCM Based on Kernel-Induced Distance Measure,” Engineering, Technology & Applied Science Research, vol. 8, no. 3, pp. 2985–2990, Jun. 2018. [3] S. Murawwat, A. Qureshi, S. Ahmad, and Y. Shahid, “Weed Detection Using SVMs,” Engineering, Technology & Applied Science Research, vol. 8, no. 1, pp. 2412–2416, Feb. 2018. [4] Y. L. Ng, X. Jiang, Y. Zhang, S. B. Shin, and R. Ning, “Automated Activity Recognition with Gait Positions Using Machine Learning Algorithms,” Engineering, Technology & Applied Science Research, vol. 9, no. 4, pp. 4554–4560, Aug. 2019. [5] S. D. Solkar and L. Das, “Survey on retinal blood vessels segmentation techniques for detection of diabetic retinopathy,” International Journal of Electronics, Electrical, and Computational Systems, vol. 6, no. 6, 2017. [6] M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” in Proceedings Medical Imaging 2004: Image Processing, May 2004, vol. 5370, pp. 648–656, doi: 10.1117/12.535349. [7] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501– 509, Apr. 2004, doi: 10.1109/TMI.2004.825627. [8] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214–1222, Sep. 2006, doi: 10.1109/TMI.2006.879967. [9] Z. F. Khan, “Automated Segmentation of Lung Parenchyma Using Colour Based Fuzzy C-Means Clustering,” Journal of Electrical Engineering & Technology, vol. 14, no. 5, pp. 2163–2169, Sep. 2019, doi: 10.1007/s42835-019-00224-8. [10] M. M. 
Fraz et al., “An approach to localize the retinal blood vessels using bit planes and centerline detection,” Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 600–616, Nov. 2012, doi: 10.1016/j.cmpb.2011.08.009. [11] E. Ricci and R. Perfetti, “Retinal Blood Vessel Segmentation Using Line Operators and Support Vector Classification,” IEEE Transactions on Medical Imaging, vol. 26, no. 10, pp. 1357–1365, Oct. 2007, doi: 10.1109/TMI.2007.898551. [12] Huiqi Li, W. Hsu, Mong Li Lee, and Tien Yin Wong, “Automatic grading of retinal vessel caliber,” IEEE Transactions on Biomedical Engineering, vol. 52, no. 7, pp. 1352–1355, Jul. 2005, doi: 10.1109/TBME.2005.847402. [13] Liang Zhou, M. S. Rzeszotarski, L. J. Singerman, and J. M. Chokreff, “The detection and quantification of retinopathy using digital angiograms,” IEEE Transactions on Medical Imaging, vol. 13, no. 4, pp. 619–626, Dec. 1994, doi: 10.1109/42.363106. [14] Y. Yin, M. Adel, and S. Bourennane, “Retinal vessel segmentation using a probabilistic tracking method,” Pattern Recognition, vol. 45, no. 4, pp. 1235–1244, Apr. 2012, doi: 10.1016/j.patcog.2011.09.019. Engineering, Technology & Applied Science Research Vol. 10, No. 4, 2020, 5986-5991 5991 www.etasr.com Saeed: A Machine Learning based Approach for Segmenting Retinal Nerve Images using Artificial … [15] O. Wink, W. J. Niessen, and M. A. Viergever, “Multiscale vessel tracking,” IEEE Transactions on Medical Imaging, vol. 23, no. 1, pp. 130–133, Jan. 2004, doi: 10.1109/TMI.2003.819920. [16] Y. Yin, M. Adel, and S. Bourennane, “Automatic segmentation and measurement of vasculature in retinal fundus images using probabilistic formulation,” Computational and Mathematical Methods in Medicine, vol. 2013, Art no. 260410, 2013, doi: 10.1155/2013/260410. [17] J. Zhang, H. Li, Q. Nie, and L. Cheng, “A retinal vessel boundary tracking method based on Bayesian theory and multi-scale line detection,” Computerized Medical Imaging and Graphics, vol. 38, no. 
6, pp. 517–525, Sep. 2014, doi: 10.1016/j.compmedimag.2014.05.010. [18] B. Zhang, L. Zhang, L. Zhang, and F. Karray, “Retinal vessel extraction by matched filter with first-order derivative of Gaussian,” Computers in Biology and Medicine, vol. 40, no. 4, pp. 438–445, Apr. 2010, doi: 10.1016/j.compbiomed.2010.02.008. [19] L. Gang, O. Chutatape, and S. M. Krishnan, “Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter,” IEEE Transactions on Biomedical Engineering, vol. 49, no. 2, pp. 168–172, Feb. 2002, doi: 10.1109/10.979356. [20] P. Bankhead, C. N. Scholfield, J. G. McGeown, and T. M. Curtis, “Fast Retinal Vessel Detection and Measurement Using Wavelets and Edge Location Refinement,” PLOS ONE, vol. 7, no. 3, 2012, doi: 10.1371/journal.pone.0032435, Art no. e32435. [21] Y. Wang, G. Ji, P. Lin, and E. Trucco, “Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition,” Pattern Recognition, vol. 46, no. 8, pp. 2117–2133, Aug. 2013, doi: 10.1016/j.patcog.2012.12.014. [22] G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov, “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Medical Image Analysis, vol. 19, no. 1, pp. 46–57, Jan. 2015, doi: 10.1016/j.media.2014.08.002. [23] N. Memari, A. R. Ramli, M. I. B. Saripan, S. Mashohor, and M. Moghbel, “Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier,” PLOS ONE, vol. 12, no. 12, 2017, doi: 10.1371/journal.pone.0188939, Art no. e0188939. [24] B. Fang, W. Hsu, and M. L. Lee, “Reconstruction of vascular structures in retinal images,” in Proceedings 2003 International Conference on Image Processing (Cat. No.03CH37429), Sep. 2003, vol. 2, pp. II–157, doi: 10.1109/ICIP.2003.1246640. [25] X. You, Q. Peng, Y. Yuan, Y. Cheung, and J. 
Lei, “Segmentation of retinal blood vessels using the radial projection and semi-supervised approach,” Pattern Recognition, vol. 44, no. 10, pp. 2314–2324, Oct. 2011, doi: 10.1016/j.patcog.2011.01.007. [26] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural Features for Image Classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, Nov. 1973, doi: 10.1109/TSMC.1973.4309314. [27] S. Hwang and M. Emre Celebi, “Texture Segmentation of Dermoscopy Images using Gabor Filters and G-Means Clustering,” in IPCV 2010: Proceedings of the 2010 International Conference on Image Processing, Computer Vision, & Pattern Recognition, 2010, pp. 882–886, [Online]. Available: http://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=26052459. [28] A. M. Mendonca and A. Campilho, “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1200–1213, Sep. 2006, doi: 10.1109/TMI.2006.879955. [29] P. Dai et al., “A New Approach to Segment Both Main and Peripheral Retinal Vessels Based on Gray-Voting and Gaussian Mixture Model,” PLOS ONE, vol. 10, no. 6, Art no. e0127748, 2015, doi: 10.1371/journal.pone.0127748. [30] T. Chakraborti, D. K. Jha, A. S. Chowdhury, and X. Jiang, “A self-adaptive matched filter for retinal blood vessel detection,” Machine Vision and Applications, vol. 26, no. 1, pp. 55–68, Jan. 2015, doi: 10.1007/s00138-014-0636-z. [31] K. BahadarKhan, A. A. Khaliq, and M. Shahid, “A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding,” PLOS ONE, vol. 11, no. 7, 2016, doi: 10.1371/journal.pone.0158996, Art no. e0158996. [32] M. Vlachos and E. Dermatas, “Multi-scale retinal vessel segmentation using line tracking,” Computerized Medical Imaging and Graphics, vol. 34, no. 3, pp. 213–227, Apr.
2010, doi: 10.1016/j.compmedimag.2009.09.006. [33] Y. Zhao, Y. Liu, X. Wu, S. P. Harding, and Y. Zheng, “Retinal Vessel Segmentation: An Efficient Graph Cut Approach with Retinex and Local Phase,” PLOS ONE, vol. 10, no. 4, 2015, doi: 10.1371/journal.pone.0122332, Art no. e0122332