Composite Feature Extraction and Classification for Fusion of Palm-Print and Iris Biometric Traits

Akram Alsubari
School of Computer Sciences, KBC North Maharashtra University, Jalgaon, India
akram.alsubari87@gmail.com

Mohammad Eid Alzahrani
Department of Computer Science, AlBaha University, AlBaha, Saudi Arabia
meid@bu.edu.sa

Shaikh Abdul Hannan
Department of Computer Science, AlBaha University, AlBaha, Saudi Arabia
abdulhannan05@gmail.com

R. J. Ramteke
School of Computer Sciences, KBC North Maharashtra University, Jalgaon, India
rakeshj.ramteke@gmail.com

Corresponding author: A. Alsubari

Abstract—This paper implements the fusion of the palm-print and iris biometric traits. The region of interest (ROI) of the palm is extracted using a valley detection algorithm, and the ROI of the iris is extracted using the neighbor-pixels value algorithm (NPVA). The statistical local binary pattern (SLBP) is applied to extract the local features of palm and iris. To enhance the palm features, a combination of the histogram of oriented gradients (HOG) and the discrete cosine transform (DCT) is applied, while Gabor-Zernike moments are used to extract the iris features. The experimentation was carried out in two modes: verification and identification. The Euclidean distance is used in the verification system; in the identification system, a fuzzy-based classifier was proposed alongside the built-in classification functions of MATLAB. The CASIA palm and iris datasets were used in this research work, and the accuracy of the proposed system was found to be satisfactory.

Keywords—palm-print; iris; valley detection; SLBP; HOG; DCT; Zernike moment; fuzzy classifier

I. INTRODUCTION

Pattern recognition classifies data based on already gained knowledge [1]; it is the process of determining the class to which an object/pattern belongs. Patterns can be 1D (e.g. signals) or 2D (e.g. images). Biometrics is an application of pattern recognition concerned with the analysis and measurement of human body characteristics. There are three types of biometrics: physical, behavioral, and cognitive. Physical biometrics deal with bodily parts such as the face, fingerprints, and palm. Behavioral biometrics consider activities of the body such as gait, keystroke dynamics, and voice. Cognitive biometrics concern the responses of human nervous tissue to stimuli, such as the electrocardiogram (ECG), electromyogram (EMG), electroencephalogram (EEG), and electrodermal response (EDR). In this paper, two physical biometrics are considered: palm-print and iris. The iris looks like a ring inside the eye and controls the intensity of light entering through the pupil; its inner radius increases and decreases with the illumination. The iris carries many features, such as tiny crypts, nevi, the anterior border layer, the collarette, and texture features [2]. In the same person, the right iris has different features than the left, and the iris is one of the touchless biometrics. The palm is the region of the human hand between the wrist and the finger roots. Authors in [3] were the first to describe the palm as a biometric trait, in 1998. Biometric systems can be classified into two types: verification and identification. In the verification mode, the presented biometric data are compared one-to-one against the enrolled template of the claimed identity to decide whether the claim is accepted or rejected.
In the identification mode, the biometric data are used to determine the ID/name of the person. Table I summarizes the differences between verification and identification in a biometric system.

TABLE I. VERIFICATION VS IDENTIFICATION

Verification | Identification
One-to-one search | One-to-many search
Answers the question "Can you prove who you are?" | Answers the question "Who are you?"
The ID is entered manually | The ID is recognized automatically
No classification needed | Classification is required
Less time (verification) | More time (identification)
Higher accuracy | Lower accuracy than verification systems
e.g. access control, fingerprint unlock in mobile phones | e.g. attendance systems, face recognition

II. LITERATURE REVIEW

Authors in [4] proposed a new palm feature extraction method named the histogram of oriented lines (HOL). HOL is based on the histogram of oriented gradients (HOG) [5], which was proposed in 2005 for human detection. HOL is not sensitive to illumination changes and extracts the palm lines along with their orientation features. The Euclidean distance was used in the verification and identification systems for the palm-print trait on two databases. In the verification mode, the error rates were 0.31% and 0.64%. In the identification mode, the recognition rates, calculated by matching samples from the testing templates, were 99.97% and 100%. Authors in [6] presented face recognition based on global and local features: for the global features, the Zernike moment was applied to the output of a Gabor filter, while HOG was applied to the face image to extract the local features. The global and local features were then combined to produce the feature vector of the face image. Nearest-neighbor classification based on the Euclidean distance was used for matching, and the accuracies were 97.8%, 98%, and 97.7% on three databases. Authors in [7] proposed the magnitudes of the Zernike moments for generating the feature vector of face images, experimenting on 2D and 3D faces. The features were selected on the basis of the order of the Zernike moments, so the length of the feature vector varied from 16 to 42. The dataset was divided into 60% for training and 40% for testing, and the neural network tools of MATLAB (nprtool) were used for matching. On the ORL database, the highest recognition rate was 99.5% with 36 features using a radial basis function neural network (RBFNN); on the 3D face database, the best result was 99.71% with 42 features using a multilayer perceptron neural network (MLPNN). Authors in [8] extracted iris and palm-print features by applying a Gabor filter with 4 orientations and 3 spatial frequencies. After the features were described by the Gabor filter, the palm and iris outputs were integrated into a single matrix using a second-level wavelet transform; in other words, a second-level wavelet transform was applied to the palm and iris images and the results were combined into one matrix to generate the feature vector. The accuracy of the fusion system was 99.2% using a KNN classifier. Authors in [9] presented a multimodal system of iris and palm-print that extracts global features using the wavelet transform.
The Haar wavelet transform was used to extract the iris features at four levels, and the Daubechies wavelet transform was likewise applied at four levels to extract the palm-print features. The Hamming distance was utilized in the matching phase for the iris and palm-print modalities separately. The authors proposed and compared three levels of fusion for the iris and palm-print biometric traits: feature level, score level, and decision level. At the feature level, the feature vectors of the iris and palm were combined into a single vector. At the score level, the feature vectors of iris and palm-print were evaluated individually and, based on each feature vector's score, the decision was made using a weighted sum rule. Finally, at the decision level, the fusion was based on the error rates of the iris and palm-print feature vectors. Authors in [10] presented a fuzzy rule-based classification for distinguishing normal sperm from abnormal.

III. METHODOLOGY

In this paper, the palm-print and iris biometric traits were fused at the feature level, as shown in Figure 1. The palm is segmented from the rest of the hand using an algorithm from our previous work [11], and the iris is localized using the NPVA [12]. The texture features of iris and palm are extracted by computing the mean and standard deviation of the local binary pattern (LBP). DCT and HOG are applied together to extract the palm texture features. A Gabor filter with 8 orientations and 5 scales is applied to the iris ROI to generate 40 sub-images, and the Zernike moment is then applied to each sub-image to produce the iris feature vector. The Gaussian fuzzy membership function is used as the classifier in the identification system.

Fig. 1. Proposed system for palm-print and iris biometric traits

A. Pre-Processing

1) ROI Extraction

The palm ROI was extracted using the valley detection algorithm [13] and thresholding segmentation. The steps to extract the palm ROI are:

• The hand image is binarized to segment the hand object from the background.
• The valley points between the fingers are detected using the method in [11]; to accept a valley/reference point, the four conditions of the valley detection algorithm must be satisfied.
• From the reference points, the coordinate system of the palm region is established with the following equations:

$P_1 = r \times [\cos(\tfrac{\pi}{2}-\theta)\;\;\sin(\tfrac{\pi}{2}-\theta)] + RP_1$  (1)

$P_2 = r \times [\cos(\tfrac{\pi}{2}+\theta)\;\;\sin(\tfrac{\pi}{2}+\theta)] + RP_3$  (2)

$d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$  (3)

$P_3 = d \times [\cos(\tfrac{\pi}{2}+\theta)\;\;\sin(\tfrac{\pi}{2}+\theta)] + P_1$  (4)

$P_4 = d \times [\cos(\tfrac{\pi}{2}+\theta)\;\;\sin(\tfrac{\pi}{2}+\theta)] + P_2$  (5)

where the locations of the points P1-P4 and RP1-RP4 are shown in Figure 2(a), r is the radius around the valley point, and d is the pixel distance from P1 to P2, which becomes the side length of the palm ROI. The slope between P1 and P2 is computed to obtain the rotation angle of the palm image.

• The final step is to rotate the image and extract the palm region at a fixed size. A minimal sketch of this coordinate-system computation is given below.
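The following is a minimal sketch of (1)-(5) in Python/NumPy; the paper's implementation was in MATLAB, so the function name and argument conventions here are illustrative assumptions. RP1, RP3, the radius r, and the angle θ are taken as already produced by the valley detection step.

```python
import numpy as np

def palm_roi_corners(rp1, rp3, r, theta):
    """Corners P1-P4 of the palm ROI from reference points RP1 and RP3,
    following (1)-(5). rp1 and rp3 are (x, y) pairs, r the radius around
    the valley point, theta the tilt angle in radians."""
    rp1 = np.asarray(rp1, dtype=float)
    rp3 = np.asarray(rp3, dtype=float)
    u = np.array([np.cos(np.pi/2 - theta), np.sin(np.pi/2 - theta)])
    v = np.array([np.cos(np.pi/2 + theta), np.sin(np.pi/2 + theta)])
    p1 = r * u + rp1                                  # (1)
    p2 = r * v + rp3                                  # (2)
    d = np.hypot(p2[0] - p1[0], p2[1] - p1[1])        # (3): ROI side length
    p3 = d * v + p1                                   # (4)
    p4 = d * v + p2                                   # (5)
    # Rotation angle of the palm from the slope between P1 and P2
    angle = np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))
    return p1, p2, p3, p4, d, angle
```

The returned corners define a d×d square that is rotated by `angle` and cropped, then resized to the fixed 64×64 palm ROI.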
For extracting the iris region, the NPVA was proposed to calculate the radius and center of the iris. The NPVA steps are:

• The Sobel operator is applied to the eye image to detect the boundary of the pupil, i.e. the inner boundary of the iris.
• A dilation operator is applied to connect the pixel gaps in the boundary.
• The circle Hough transform [14] is applied to find the radius and center of the pupil circle.
• To detect the outer boundary of the iris, three neighboring pixels at a time are examined from the pupil boundary outwards, in the horizontal-left and horizontal-right directions, as shown in Figure 2(b). When those pixels' intensity values exceed 200, i.e. are close to white, the scan stops and the number of skipped pixels yields the iris radius (a sketch of this scan is given at the end of this pre-processing section). After the iris localization, pixel intensity values close to 0 or close to 255 are considered noise (sclera, eyelids, and eyelashes) and are discarded.

After the iris localization, the iris region is converted into a rectangular form using rubber sheet normalization [15]. The iris images were resized to 64×384 and the palm images to 64×64.

Fig. 2. (a) The location of the reference and ROI points, (b) the direction of computing the NPVA

2) Enhancement

Three enhancement techniques were applied to the palm ROI: contrast-limited adaptive histogram equalization (CLAHE) [16], a Sobel filter, and a Gaussian filter mask.

a) CLAHE: This method is based on local histogram equalization (LHE) and enhances the image in three steps [17]:

• Divide the image into non-overlapping sub-blocks.
• Enhance each sub-block individually.
• Apply an interpolation operation to reduce the sub-block artifacts.

b) Sobel Mask: After enhancing the contrast of the palm images with CLAHE, a Sobel mask is applied to filter the images and bring out the palm lines and wrinkles. The Sobel mask can serve as a feature extraction technique, since it responds to the edges of the image, but in this paper it is treated as an enhancement step because it reveals the palm's principal lines and wrinkles:

$G_x = \begin{bmatrix} +1 & 0 & -1 \\ +2 & 0 & -2 \\ +1 & 0 & -1 \end{bmatrix} * Img$  (6)

$G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} * Img$  (7)

where Img is the palm image, G_x contains the horizontal derivative approximation, and G_y contains the vertical derivative approximation. In this experiment, the vertical Sobel mask is selected to detect and enhance the features of the palm images [18].

c) Gaussian Mask: The last step of the palm enhancement is a Gaussian smoothing filter, which reduces the noise of the palm image.

For the iris, histogram equalization (HE) is applied to the ROI to enhance the contrast of the iris pixels by redistributing the intensity values over the range 0-255. The effect of HE on the iris ROI is shown in Figure 3.

Fig. 3. Histogram equalization of the iris
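Returning to the NPVA outer-boundary scan promised above, here is a minimal NumPy sketch. The paper does not spell out how the three-pixel groups are aggregated or how the two sides are combined, so the mean test and the averaging of the two radii are assumptions:

```python
import numpy as np

def npva_iris_radius(gray, cx, cy, pupil_r, thresh=200, step=3):
    """Scan three pixels at a time from the pupil boundary towards the
    horizontal left and right; stop when their mean intensity exceeds
    `thresh` (near-white sclera). The distance covered from the center
    gives the iris radius estimate."""
    w = gray.shape[1]
    radii = []
    for sign in (-1, +1):                   # horizontal-left, horizontal-right
        offset = pupil_r
        while True:
            xs = cx + sign * (offset + np.arange(1, step + 1))
            if xs.min() < 0 or xs.max() >= w:
                break                       # ran off the image border
            if gray[cy, xs].mean() >= thresh:
                break                       # reached the sclera
            offset += step                  # skip these pixels, keep scanning
        radii.append(offset)
    return int(np.mean(radii))              # combine both sides (assumption)
```

The pupil center (cx, cy) and radius pupil_r would come from the circle Hough transform of the preceding steps (e.g. OpenCV's cv2.HoughCircles).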
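The enhancement chain itself maps onto standard OpenCV calls; this is a sketch, not the paper's code, and the parameter values (clip limit, tile size, kernel sizes, σ) are illustrative assumptions:

```python
import cv2

def enhance_palm(palm_roi):
    """CLAHE -> vertical Sobel mask (7) -> Gaussian smoothing:
    the three palm enhancement steps of Section III.A.2."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply(palm_roi)                      # a) CLAHE
    gy = cv2.Sobel(contrast, cv2.CV_64F, 0, 1, ksize=3)   # b) vertical Sobel
    gy = cv2.convertScaleAbs(gy)
    return cv2.GaussianBlur(gy, (5, 5), 1.0)              # c) Gaussian mask

def enhance_iris(iris_strip):
    """Plain histogram equalization on the normalized 64x384 iris strip."""
    return cv2.equalizeHist(iris_strip)
```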
B. Feature Extraction

Feature extraction in image processing converts the matrix (image) into a set of mathematical parameters in vector form (a feature vector or feature template). Such features can be edges, palm lines, fingerprint minutiae, neighbor-pixel (LBP) features, texture features, shape-based features, etc. In this experiment, the texture features of the palm and iris were extracted using the following techniques; minimal code sketches of all three follow at the end of this subsection.

1) SLBP

LBP was introduced in [19] and is used here as a filter for the palm and iris images. LBP is applied at each pixel of an image except the first and last rows and columns. The LBP filter compares each pixel against its 8 neighboring values: if the intensity of a neighbor is greater than or equal to that of the central pixel, it is assigned 1, otherwise 0. This produces 8 binary digits, which are converted to a decimal number that replaces the center pixel, as shown in Figure 4. In other words, the central pixel is the threshold value for its neighbor pixels [20].

Fig. 4. LBP filtering

To obtain the SLBP, the mean and standard deviation are calculated from the LBP matrix and composited to generate the SLBP feature vector. The mean is:

$M = \frac{1}{N}\sum_{i=1}^{N} A_i$  (8)

and the standard deviation is:

$S = \sqrt{\frac{1}{N}\sum_{i=1}^{N} |A_i - M|^2}$  (9)

SLBP is the feature extraction technique common to palm and iris. Because SLBP extracts only local features, different methods were adopted to enhance the features of each trait.

2) Composite of DCT and HOG

DCT is an image transform that converts signals/pixels into the frequency domain and can be used to extract the edges from the palm image. Since the palm's characteristic features are edges (principal lines, wrinkles, etc.), the combination of HOG and DCT was proposed to extract the palm features. The DCT is computed with:

$f(a) = w(a)\sum_{n=1}^{N} y(n)\cos\left(\frac{\pi}{2N}(2n-1)(a-1)\right)$  (10)

$w(a) = \begin{cases} \frac{1}{\sqrt{N}}, & a = 1 \\ \sqrt{\frac{2}{N}}, & 2 \le a \le N \end{cases}$  (11)

where a = 1, 2, 3, ..., N, N is the length of y, and f and y have the same length. The indexing starts at a = 1 because MATLAB vectors are 1-based. The DCT output contains negative and positive values, so the negative values are converted into positive ones.

HOG was introduced in 2005 [5] for the purpose of human detection; in this experiment it is used to extract the texture features of the palm. Thus, the combination of DCT and HOG is proposed: DCT is applied to the palm image to extract the edges, and HOG is then applied to the DCT output. The HOG steps are:

• The palm image is divided into cells and blocks, where a cell is 8×8 pixels and a block is 2×2 cells, i.e. 16×16 pixels.
• A 50% overlap with the next block is used, as shown in Figure 5, so the total number of blocks in each 64×64 palm image is 7×7 = 49.
• Nine directions are selected over the range 0°-180°. The gradients and their magnitudes are computed with:

$dx = I(x+1, y) - I(x-1, y)$  (12)

$dy = I(x, y+1) - I(x, y-1)$  (13)

$m(x, y) = \sqrt{dx^2 + dy^2}$  (14)

• The histogram of each cell over the 9 orientation bins is computed. The feature count is:

Total number of features = NB × CB × P  (15)

where NB is the total number of blocks in the palm image, CB is the number of cells in each block (four), and P is the number of orientation bins (nine). From (15), the total number of HOG features is 49 × 4 × 9 = 1764.

Fig. 5. Blocks and cells in the palm image

3) Composite of Gabor Filter and Zernike Moment

The idea behind this technique is to combine the Gabor filter with the Zernike moments to extract the texture features of the iris image; this combination has achieved satisfactory results on the face and iris traits. 2D Gabor filters [22] are widely used in texture analysis, edge detection, feature extraction, etc. In this implementation, the Gabor filter was applied to the iris image with 8 orientations and 5 scales, generating 40 sub-images. The Zernike moments [23] were then computed on each sub-image with different orders and repetitions, and four features were selected from each sub-image, so the length of the feature vector of this technique is 160. The Zernike moment of order n with repetition m is obtained by (16)-(18):

$Z_{nm} = \frac{n+1}{\pi}\sum_{x}\sum_{y} f(x,y)\,V^{*}_{nm}(x,y), \quad x^2 + y^2 \le 1$  (16)

The Zernike moments of a rotated image have the same magnitudes, so |Z_nm| is used as a rotation-invariant feature. V_nm(x, y) is a Zernike polynomial over the unit circle x² + y² ≤ 1:

$V_{nm}(x,y) = V_{nm}(\rho,\theta) = R_{nm}(\rho)\exp(jm\theta)$  (17)

$R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} (-1)^s \frac{(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\,\rho^{n-2s}$  (18)

where n is a non-negative integer representing the order of the Zernike moment and m represents the repetition, subject to the conditions that n − |m| is even and |m| ≤ n. Also, ρ = √(x² + y²) and θ = tan⁻¹(y/x).
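First, a NumPy sketch of the SLBP descriptor. The paper reports 124 SLBP values per trait but does not spell out the regioning that produces them, so the per-block grid below (and its size) is an illustrative assumption:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel
    and pack the resulting bits into one decimal value per pixel."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                       # centers (borders skipped)
    # 8 neighbors, clockwise from top-left; bit weighting is an assumption
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    out = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        out += (nb >= c).astype(np.int32) << bit
    return out

def slbp_features(img, grid=(8, 8)):
    """Mean and standard deviation, (8)-(9), of the LBP matrix computed
    per block of an assumed grid, then composited into one vector."""
    lbp = lbp_image(img)
    feats = []
    for rows in np.array_split(lbp, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            feats += [block.mean(), block.std()]
    return np.array(feats)
```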
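Next, the DCT-HOG composite, sketched with SciPy and scikit-image (assumed available). Whether the paper applies (10) row-wise or as a full 2D transform is not stated; the sketch uses the separable 2D DCT. With the stated cell/block/bin settings, skimage's HOG returns exactly 49 × 4 × 9 = 1764 values for a 64×64 input:

```python
import numpy as np
from scipy.fft import dct
from skimage.feature import hog

def dct_hog_features(palm_roi):
    """DCT on the 64x64 palm ROI, absolute values (the text converts
    negatives to positives), then HOG: 8x8-pixel cells, 2x2-cell blocks
    with 50% overlap, 9 orientation bins -> 1764 features."""
    # Separable 2D DCT built from the 1-D orthonormal DCT of (10)-(11)
    coeffs = dct(dct(palm_roi.astype(float), norm='ortho', axis=0),
                 norm='ortho', axis=1)
    edges = np.abs(coeffs)
    return hog(edges, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2')
```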
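Finally, a sketch of the Gabor-Zernike extractor following (16)-(18). The Gabor kernel parameters and the four (n, m) order/repetition pairs per sub-image are not given in the text, so the values below are illustrative assumptions:

```python
import numpy as np
import cv2
from math import factorial

def zernike_magnitude(img, n, m):
    """|Z_nm| of a sub-image mapped onto the unit disk, per (16)-(18)."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h*1j, -1:1:w*1j]          # map pixels to [-1, 1]^2
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                               # unit circle x^2+y^2<=1
    R = np.zeros_like(rho)                          # radial polynomial (18)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1)**s * factorial(n - s) /
             (factorial(s) * factorial((n + abs(m))//2 - s)
                           * factorial((n - abs(m))//2 - s)))
        R += c * rho**(n - 2*s)
    V = R * np.exp(-1j * m * theta)                 # conjugate of (17)
    Z = (n + 1) / np.pi * (img[mask] * V[mask]).sum()   # (16)
    return abs(Z)                                   # rotation invariant

def gabor_zernike_features(iris_strip, orders=((2,0), (2,2), (3,1), (3,3))):
    """8 orientations x 5 scales of Gabor -> 40 sub-images, then four
    |Z_nm| values each -> 160 features. Wavelengths, sigma, and the
    (n, m) pairs are assumptions."""
    feats = []
    for scale in (4, 6, 8, 10, 12):                 # 5 assumed wavelengths
        for k in range(8):                          # 8 orientations
            kern = cv2.getGaborKernel((21, 21), 4.0, k*np.pi/8, scale, 0.5)
            sub = cv2.filter2D(iris_strip.astype(np.float32), -1, kern)
            feats += [zernike_magnitude(sub, n, m) for n, m in orders]
    return np.array(feats)                          # length 160
```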
C. Fusion

The fusion integrates the palm and iris biometric traits; in this experiment, it was done at the feature level. The feature vectors of iris and palm were composited to generate a single vector. Before compositing the vectors, max normalization was adopted to bring the values of each vector into the same range [0, 1]. The max normalization steps are:

• Convert the feature vector values into integer values.
• Find the maximum value of the feature vector of the iris and of the palm.
• Divide each feature vector by the maximum value of the iris and palm, respectively.
• Composite the feature vectors of iris and palm into a single vector.

The numbers of extracted features are shown in Table II.

TABLE II. NUMBER OF FEATURES

Biometric trait | SLBP features | DCT-HOG features | Gabor-Zernike moment features | Fusion features
Palm-print | 124 | 1764 | — | 1888
Iris | 124 | — | 160 | 284
Fusion of palm-print and iris | | | | 2172

D. Matching

The proposed system was evaluated in two different modes: verification and identification.

1) Verification

The verification mode is a one-to-one system and is more widely used than the identification system. For biometric verification, a matching algorithm was developed based on the Euclidean distance. Its steps are:

• Enroll all the features and labels of the biometric trait in matrix form with their labels/classes.
• Divide the features into training vectors and a testing vector for each person separately.
• Calculate the Euclidean distances between the training vectors of a given person and find the maximum distance value, Ma.
• Compute the Euclidean distances of the testing vector against the training vectors of the same label and assign those values to a different variable, Me. Apply this process for all persons.
• If Me does not exceed Ma, the testing vector is accepted; otherwise, it is rejected.
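A compact NumPy sketch of the max-normalization fusion above; note the integer cast follows the first step literally and presumes the raw features are not already fractions below 1:

```python
import numpy as np

def max_normalize(v):
    """Max normalization: cast to integers, then divide by the maximum
    so the values fall in the range [0, 1] (Section III.C)."""
    v = np.asarray(v).astype(int)
    return v / v.max()

def fuse(palm_feats, iris_feats):
    """Feature-level fusion: normalize each trait's vector separately,
    then composite them (1888 palm + 284 iris = 2172 features)."""
    return np.concatenate([max_normalize(palm_feats),
                           max_normalize(iris_feats)])
```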
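And a minimal sketch of the verification rule, assuming the claim is accepted when the smallest distance in Me does not exceed the threshold Ma:

```python
import numpy as np

def verify(train_vecs, test_vec):
    """Accept a claimed identity if the testing vector lies no farther
    from the claimant's training vectors than they lie from each other."""
    t = np.asarray(train_vecs, dtype=float)
    # Ma: maximum pairwise Euclidean distance among the training vectors
    diffs = t[:, None, :] - t[None, :, :]
    ma = np.sqrt((diffs ** 2).sum(axis=-1)).max()
    # Me: distances from the testing vector to each training vector
    me = np.sqrt(((t - np.asarray(test_vec, float)) ** 2).sum(axis=-1))
    return bool(me.min() <= ma)   # acceptance test: min(Me) <= Ma (assumption)
```

Here `verify` would be called once per claimed identity, with `train_vecs` holding that person's enrolled fused vectors and `test_vec` the fused probe.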