Mathematical Problems of Computer Science 42, 85--96, 2014.

Palm Vein Minutiae Feature Extraction for Human Identification

Sergey S. Chidemyan¹, Aram H. Jivanyan² and Gurgen H. Khachatryan²

¹ Russian-Armenian (Slavonic) University
² American University of Armenia
e-mail: serchch@gmail.com, ajivanyan@aua.am, gurgenkh@aua.am

Abstract

This paper presents an approach for extracting minutiae features from palm-vein images for biometric purposes. We show how palm-vein features can be extracted efficiently and accurately, and, in particular, how the approach accommodates potential deformations as well as rotational and translational changes. Bifurcation points and ending points are chosen as the minutiae features extracted from palm veins. Analysis of a database shows that each palm-vein image contains on average 25 minutiae features. The experimental results show that the quantity of minutiae features in each vein pattern is sufficient to perform the personal identification task.

Keywords: Vein Pattern, Palm Vein, Biometrics, Minutiae Features, Personal Identification

1. Introduction

Palm-vein human identification technology is based on pictures of palms taken by infrared cameras. The vein pattern becomes visible because hemoglobin absorbs infrared radiation: the reflectance is reduced and the veins appear as black lines. The vein pattern is a network of vessels underneath the human skin. As with fingerprints, the layout of the vein pattern in the same part of the palm is considered distinct for each person.
Taking into consideration this fact, together with the facts that vein patterns are stable over a long period of time and, more importantly, invisible to the human eye (which makes them hard, if not impossible, to copy), vein patterns can be considered very useful biometric features for the human identification problem.

There are four processing steps to extract the final features: palm-vein database acquisition, image enhancement, vein pattern extraction and feature extraction (Fig. 1). During the image acquisition stage, the CASIA Multi-Spectral Palmprint Image Database V1.0 (CASIA database) is used. This database was acquired with a contactless imaging device and contains images from 100 users. Six images were acquired from each user in two different data acquisition sessions (three images in each session) with a minimum interval of one month. Since palm veins are most visible under 850 nm wavelength illumination, only the images under this wavelength are chosen.

In this paper an approach to image enhancement, vein pattern extraction and feature extraction is presented. The rest of the paper is organized as follows. In Section 2 the problems that arise during preprocessing of infrared images and the methods to solve them are discussed, including ROI (region of interest) selection, image quality enhancement and segmentation of the vein pattern. In Section 3 the feature extraction problem is discussed and the method used to solve it is described. In Section 4 some experimental results are introduced. Finally, in Section 5, conclusions are given with some discussion.

Fig. 1. Feature Extraction Steps.

2.
Preprocessing Methods of Infrared Images

2.1 Region of Interest Extraction

At this step our purpose is to segment the rectangular region in hand images that defines the region of interest (ROI) containing the main vein pattern of the palm. The main problem that arises during ROI segmentation is how to automatically normalize the image so that potential deformations as well as rotational and translational changes, caused by the interaction of the user with the device (NIR camera), are minimized. To solve this problem it is necessary to create a coordinate system that is invariant to each of the variations mentioned above. It is natural to associate the ROI with the palm itself, so two reference points are chosen for building such a coordinate system: the web between the index finger and the middle finger, and the web between the ring finger and the little finger (Fig. 2(c)). To locate the ROI, the contour of the hand is extracted using any edge filter suitable for this purpose, such as the Sobel, Canny, Prewitt or Roberts filter (Fig. 2(a)). Then the mid-point of the hand is defined and the distance between each point on the hand contour and the mid-point is calculated. The reference points can be found from the resulting distance profile (Fig. 2(b)). Having reference points P1 and P2, the rectangular region of interest can be defined as follows:

L_{P1P3} = L_{P1P2} · α,

where L_{P1P2} is the distance between P1 and P2, and α is determined experimentally from the database of acquired images. Our experiments show that a suitable common value of α is 1.36 (Fig. 2(d)).

Fig. 2. ROI extraction. (a) Image after edge detection using the Sobel filter. (b) Distance profile between the mid-point and contour points.
(c) Valley point detection. (d) Final ROI.

2.2 Image Enhancement

The images in our database are taken under near-infrared illumination, which affects their quality. In particular, they appear dark with low contrast and need to be adjusted. First, a 5 × 5 median filter is used to remove speckle noise from the image. Median filtering is a standard denoising procedure: the image is scanned pixel by pixel and each pixel is replaced with the median of its neighboring pixels. To suppress high-frequency noise a 10 × 10 Wiener filter is used. For an image I, by estimating the local mean and variance in an M × N window around each pixel, the Wiener filter can be defined as follows:

Wiener(i, j) = μ + ((σ² − v²)/σ²)(I(i, j) − μ),   (1)

where

μ = (1/MN) Σ_{i=1..M} Σ_{j=1..N} I(i, j),   σ² = (1/MN) Σ_{i=1..M} Σ_{j=1..N} (I(i, j)² − μ²),

and v² is the average of all the local variances. After removing the high-frequency noise a last noise-removal filter is applied: the anisotropic diffusion filter. We use this filter to preserve the edges of the vein pattern that is to be extracted. Anisotropic diffusion is a technique aimed at reducing image noise without removing significant parts of the image content, typically the details that are important for the interpretation of the image. Anisotropic diffusion resembles the process that creates a scale space, where an image generates a parameterized family of successively more blurred images based on a diffusion process. Each of the resulting images in this family is given as a convolution between the image and a 2D isotropic Gaussian filter, where the width of the filter increases with the parameter. This diffusion process is a linear and space-invariant transformation of the original image.
Anisotropic diffusion is a generalization of this diffusion process: it produces a family of parameterized images, but each resulting image is a combination of the original image and a filter that depends on the local content of the original image. As a consequence, anisotropic diffusion is a non-linear and space-variant transformation of the original image. Formally, let Ω ⊂ ℝ² denote a subset of the plane and I(·, t): Ω → ℝ be a family of gray-scale images; then anisotropic diffusion is defined as follows:

∂I/∂t = div(c(x, y, t) ∇I) = ∇c · ∇I + c(x, y, t) ΔI,   (2)

where Δ denotes the Laplacian, ∇ denotes the gradient, div(·) is the divergence operator and c(x, y, t) is the diffusion coefficient. The coefficient c(x, y, t) controls the rate of diffusion and is usually chosen as a function of the image gradient so as to preserve edges in the image. As the diffusion coefficient we used the function

c(‖∇I‖) = e^{−(‖∇I‖/K)²},

where the constant K controls the sensitivity to edges; its value is chosen experimentally equal to 20. Finally, to adjust the contrast of the images, simple histogram equalization is used. Fig. 3 shows the extracted ROI image after the median, Wiener and anisotropic diffusion filters and after image enhancement with histogram equalization. It can be seen that the quality of the image has been improved significantly.

Fig. 3. Illustration of image enhancement: (a) original ROI images, (b) correspondingly enhanced ROI images of (a).
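To make the diffusion step concrete, the sketch below is a minimal explicit Perona-Malik-style discretization of Eq. (2) with the exponential coefficient c(‖∇I‖) = e^{−(‖∇I‖/K)²} and K = 20. The function name, the time step dt, the iteration count and the periodic border handling are our own illustrative choices, not the exact implementation used in the paper.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, K=20.0, dt=0.2):
    """Explicit Perona-Malik iteration, a discrete sketch of Eq. (2).

    For each of the four neighbour directions the flux is weighted by
    c = exp(-(|grad I| / K)**2), so diffusion is suppressed across
    strong edges and vein boundaries are preserved. Borders are
    treated periodically via np.roll (a simplification)."""
    I = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the N, S, E, W neighbours
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        I += dt * sum(np.exp(-(d / K) ** 2) * d for d in (dN, dS, dE, dW))
    return I
```

With dt ≤ 0.25 the explicit four-neighbour scheme is stable; larger gradients relative to K receive exponentially smaller diffusion, which is what preserves the vein edges.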
2.3 Segmentation of Vein Pattern

A. Multiscale vessel enhancement

By examining our database we found that the multiscale vessel enhancement scheme proposed in [1] can be used to segment the vein pattern. In this approach vessel enhancement is conceived as a filtering process that searches for geometrical structures that can be regarded as tubular. Since vessels appear in different sizes, it is important to introduce a measurement scale that varies within a certain range. To analyze the local behavior of an image L, its Taylor expansion in the neighborhood of a point x₀ is considered:

L(x₀ + Δx₀, s) ≈ L(x₀, s) + Δx₀ᵀ ∇₀,ₛ + Δx₀ᵀ H₀,ₛ Δx₀.   (3)

This expansion approximates the structure of the image up to second order. ∇₀,ₛ and H₀,ₛ are the gradient vector and the Hessian matrix of the image computed at x₀ at scale s. Differentiation here is defined as a convolution with the derivatives of the Gaussian function:

(∂/∂x) L(x, s) = s^γ L(x) ∗ (∂/∂x) G(x, s),   (4)

where the D-dimensional Gaussian function at scale s is defined as

G(x, s) = (1/(√(2πs²))^D) e^{−‖x‖²/(2s²)},   (5)

and ∗ is the convolution operator. The parameter γ was introduced by Lindeberg [2] to define a family of normalized derivatives. For our purpose it is set equal to 1. The idea behind the eigenvalue analysis of the Hessian is to extract the principal directions into which the local second-order structure of the image can be decomposed. Since this directly gives the direction of smallest curvature (along the vessel), the application of several filters in multiple orientations is avoided. Let λₛ,ₖ denote the eigenvalue corresponding to the k-th normalized eigenvector uₛ,ₖ of the Hessian H₀,ₛ, all computed at scale s. From the definition of eigenvalues:

H₀,ₛ uₛ,ₖ = λₛ,ₖ uₛ,ₖ,   (6)

and it follows that:

uₛ,ₖᵀ H₀,ₛ uₛ,ₖ = λₛ,ₖ.   (7)

Let λₖ denote the eigenvalue with the k-th smallest magnitude (|λ₁| ≤ |λ₂| ≤ …).
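As a numerical illustration of this eigenvalue analysis, the sketch below computes the two Hessian eigenvalues at every pixel, ordered by magnitude as above, ready to feed the vesselness measure. It is a simplification of the paper's method: the function name is ours, and plain central finite differences replace the Gaussian-derivative convolution of Eq. (4) (i.e., a single scale with no smoothing).

```python
import numpy as np

def hessian_eigenvalues(img):
    """Eigenvalues (|l1| <= |l2|) of the 2x2 image Hessian at every
    pixel, via central finite differences. Single-scale simplification:
    the paper convolves with Gaussian derivatives at scale s (Eq. (4));
    here no smoothing is applied."""
    gy, gx = np.gradient(img.astype(float))   # first derivatives
    hxx = np.gradient(gx, axis=1)             # d2I/dx2
    hxy = np.gradient(gx, axis=0)             # d2I/dxdy
    hyy = np.gradient(gy, axis=0)             # d2I/dy2
    # closed-form eigenvalues of a symmetric 2x2 matrix
    half_trace = (hxx + hyy) / 2.0
    root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    la, lb = half_trace + root, half_trace - root
    swap = np.abs(la) > np.abs(lb)            # order by magnitude
    l1 = np.where(swap, lb, la)
    l2 = np.where(swap, la, lb)
    return l1, l2
```

For a quadratic ramp I(x, y) = x², the interior eigenvalues are exactly 0 and 2, matching the analytic Hessian.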
In particular, a pixel belonging to a vessel region will be signaled by λ₁ being small (ideally zero): λ₁ ≈ 0 and |λ₁| ≪ |λ₂|. Two local characteristics of the image can be measured by analyzing the above two equations. First, the norm of the eigenvalues will be small where no structural information is present, since the contrast difference is low, and it will become larger in regions of higher contrast, since at least one of the eigenvalues will be large. Second, the ratio between |λ₁| and |λ₂| will be large when a blob-like structure appears in the local area, and will be very close to zero when the structure is line-like. Mathematically, the two measures are represented as follows:

Lₛ = √(Σ_{i ≤ D} λᵢ²),   (8)

Lₗ = |λ₁| / |λ₂|,   (9)

where Lₛ and Lₗ are the measures mentioned above and D is the dimension of the image. For 2D images the proposed local vesselness measure at position x₀ and scale s has the following form:

v₀,ₛ = 0, if λ₂ > 0;  otherwise v₀,ₛ = e^{−Lₗ²/(2α²)} (1 − e^{−Lₛ²/(2β²)}).   (10)

The vesselness measure in Equation (10) is analyzed at different scales s. The response of the line filter will be maximal at a scale that approximately matches the size of the vessel to detect. The vesselness measure provided by the filter response at different scales is integrated to obtain a final estimate of vesselness:

ν₀(ζ) = max_{s_min ≤ s ≤ s_max} ν₀(s, ζ),   (11)

where s_min and s_max are the minimum and maximum scales at which relevant structures are expected to be found. They can be chosen so that they cover the range of vessel widths. Experimentally we found that for our purpose it is enough to take α equal to 0.5 and β equal to 8. Fig. 4 shows enhanced ROI images and the corresponding images after multiscale vessel enhancement.

Fig. 4. Illustration of multiscale vessel enhancement: (a) enhanced ROI images, (b) corresponding ROI images after the multiscale vessel filter.

B.
Local thresholding with global reduction

As can be seen from Fig. 4, the quality of the images improves after noise reduction and enhancement; however, there are still many faint white regions near the vein pattern. To obtain a better representation of the vein pattern it is necessary to separate it from the background. Since global thresholding techniques alone do not provide satisfactory results, local thresholding with global reduction (LTGR) is used. LTGR is an adaptive algorithm that combines both local and global thresholding: it chooses a different threshold value for every pixel in the image based on the values of its surrounding neighbors. Mathematically it can be expressed as follows:

BinarizedImage(i, j) = 1, if Image(i, j) ≥ μᵢ,ⱼ − T; 0, otherwise,   (12)

where μᵢ,ⱼ is the mean value of the N × N neighborhood of pixel (i, j) and T is a common offset for all pixels. The parameters N and T are found experimentally equal to 10 and 6, respectively. Fig. 5 illustrates the final extracted vein pattern after LTGR. It can be seen that the true vein structures are quite well preserved in the resulting binarized image.

Fig. 5. Illustration of vein pattern extraction with the LTGR algorithm: (a) ROI images after multiscale vessel enhancement, (b) ROI images after the LTGR algorithm is applied.

3. Minutiae Feature Extraction

3.1 Skeletonization

To obtain minutiae features from the binarized vein pattern, its skeleton representation should first be obtained. Any skeletonization algorithm can be used for this purpose; we use the thinning algorithm proposed in [3]. Fig. 6 shows the skeletons of the vein patterns extracted in the previous step. As can be seen from Fig. 6, the vein patterns are well preserved.

Fig. 6. ROI images after the thinning algorithm.
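Returning briefly to the binarization step: the LTGR rule of Eq. (12) admits a direct, if naive, implementation. The sketch below (function name ours) uses brute-force window means rather than an integral image, and clips windows at the borders; it is meant only to make Eq. (12) operational, not to reproduce the paper's code.

```python
import numpy as np

def ltgr_binarize(img, N=10, T=6):
    """Binarize per Eq. (12): a pixel becomes 1 when its value is at
    least the mean of its N x N neighbourhood minus a global offset T.
    Brute-force window means; windows are clipped at the borders."""
    img = img.astype(float)
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.uint8)
    for i in range(H):
        for j in range(W):
            win = img[max(0, i - N // 2):i + N // 2 + 1,
                      max(0, j - N // 2):j + N // 2 + 1]
            if img[i, j] >= win.mean() - T:
                out[i, j] = 1
    return out
```

For production use the per-pixel window mean would be replaced by a box filter or integral image, which reduces the cost from O(HWN²) to O(HW).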
3.2 Extraction of Minutiae Points

The vein pattern can be well represented by a number of critical points referred to as minutiae points. The branching points and the ending points in the vein pattern skeleton image are the two types of critical points to be extracted. The ending points here are mainly the ending points of vein skeleton curves located at the edge of the ROI, produced by cropping the hand image when the ROI was obtained. Although these ending points are not the real ending points of the veins on the palm, they are used because they carry geometrical information about the shape of the vein pattern skeleton. The bifurcation points are the junction points of three curves. Fig. 7 illustrates some of the bifurcation and ending points on the skeleton representation of a vein pattern.

Fig. 7. Some of the bifurcation points are marked by red circles.

To obtain the junction points, the method of [4] is used. We run a pixel-wise operation over each 3 × 3 region

P1 P2 P3
P8 P0 P4
P7 P6 P5

and define the number of transitions between 0 and 1 (and vice versa) from P1 to P8 as follows:

N_trans = Σ_{i=1}^{8} |P_{i+1} − P_i|, where P9 = P1.

Then P0 is an ending point if P0 = 1 and N_trans = 2, and a branch point if P0 = 1 and N_trans ≥ 6.

Fig. 8 illustrates the extracted bifurcation and ending points, after merging the two sets of points, for one of the extracted palm-vein pattern skeletons. These minutiae points are widely used as features to match pairs of palm veins and thus to identify a person.

Fig. 8. Union of the sets of bifurcation and ending points for one vein pattern skeleton.

4. Experimental Results

Different techniques can be used to compare the similarity of the extracted minutiae points of two different persons. For our purpose we used the modified Hausdorff distance, proposed in [5].
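A minimal sketch of this distance follows: the mean, over the minutiae points of one set, of the Euclidean distance to the closest point of the other set. The function name is ours, and `math.dist` requires Python 3.8+.

```python
import math

def mhd(A, B):
    """Directed modified Hausdorff distance: the mean, over the
    minutiae points of A, of the Euclidean distance to the nearest
    minutiae point of B. Points are (x, y) tuples."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)
```

Note the measure is directed: mhd(A, B) and mhd(B, A) generally differ, and Dubuisson and Jain's original formulation [5] symmetrizes it by taking the maximum of the two directed values.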
For two point sets A and B, the modified Hausdorff distance (MHD) can be defined as follows:

h(A, B) = (1/N) Σ_{aᵢ ∈ A} min_{bⱼ ∈ B} ‖aᵢ − bⱼ‖,   (13)

where N is the number of points in A.

Fig. 9. Error rate curves for minutiae recognition using MHD.

Fig. 9 shows the False Acceptance Rate and False Rejection Rate curves, from which it can be seen that the MHD algorithm achieves an equal error rate (EER) of approximately 1% when the threshold value is set to 11, which is quite good for this kind of system.

5. Conclusion

In this paper we have presented an approach, together with experimental results, showing how the problems that arise during hand palm segmentation, ROI image enhancement, palm-vein extraction and minutiae feature extraction can be solved efficiently and quickly. We applied several image enhancement techniques, including some morphological ones. As can be seen, the images are enhanced significantly after each preprocessing step. Experiments on the CASIA database show that we can extract on average 25 minutiae points from each vein pattern: on average 10 bifurcation points and 15 ending points per pattern. This quantity of minutiae points is quite sufficient for the purpose of human identification. Since the database is reasonably large (six hand images for each of 100 persons), our experiments support the claim that these minutiae features are discriminating features in hand-vein images.

References

[1] A. F. Frangi, W. J. Niessen, K. L. Vincken and M. A. Viergever, “Multiscale vessel enhancement filtering”, MICCAI, Springer, LNCS 1496, pp. 130--137, 1998.

[2] T. Lindeberg, “Edge detection and ridge detection with automatic scale selection”, Proc. Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, June, pp. 465--470, 1996.

[3] Z. Guo and R. W. Hall, “Fast fully parallel thinning algorithms”, Computer Vision, Graphics and Image Processing, vol. 55, no. 3, pp. 317--328, 1992.
[4] L. Y. Wang, G. Leedham and D. S.-Y. Cho, “Minutiae feature analysis for infrared hand vein pattern biometrics”, Pattern Recognition, Elsevier, 2007.

[5] M.-P. Dubuisson and A. K. Jain, “A modified Hausdorff distance for object matching”, Proceedings of the 12th International Conference on Pattern Recognition, Jerusalem, Israel, pp. 566--568, 1994.

[6] H. Soliman, A. Saber Mohamed and A. Atwan, “Feature level fusion of palm veins and signature biometrics”, International Journal of Video & Image Processing and Network Security IJVIPNS-IJENS, vol. 12, no. 1, pp. 28--39, 2012.

[7] Y. Zhou and A. Kumar, “Human identification using palm-vein images”, IEEE Transactions on Information Forensics and Security, vol. 6, no. 4, pp. 1259--1274, 2011.

[8] Goh Kah Ong Michael, T. Connie and A. Beng Jin Teoh, “Touch-less palm print biometrics: Novel design and implementation”, Image and Vision Computing, vol. 26, pp. 1551--1560, 2008.

[9] M. H.-M. Khan, R. K. Subramanian and N. A. Mamode Khan, “Low dimensional representation of dorsal hand vein features using principal component analysis (PCA)”, World Academy of Science, Engineering and Technology, vol. 3, no. 49, pp. 848--854, 2009.

[10] L. Mirmohamadsadeghi and A. Drygajlo, “Palm vein recognition with local binary patterns and local derivative patterns”, Proceedings of the 2011 International Joint Conference on Biometrics, October 11--13, pp. 1--6, 2011.

[11] A. Kumar and K. V. Prathyusha, “Personal authentication using hand vein triangulation and knuckle shape”, IEEE Transactions on Image Processing, vol. 38, pp. 2127--2136, Sep. 2009.

[12] Yi-Bo Zhang, Qin Li, J. You and P. Bhattacharya, “Palm vein extraction and matching for personal authentication”, G. Qiu et al. (Eds.): VISUAL 2007, LNCS 4781, pp. 154--164, 2007.

[13] D. Zhang, W.-K. Kong, J. You and M.
Wong, “Online palmprint identification”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1041--1050, September 2003.

[14] A. Shrotri, S. C. Rethrekar, M. H. Patil, D. Bhattacharyya and Tai-hoon Kim, “Infrared imaging of hand vein patterns for biometric purposes”, Journal of Security Engineering, pp. 57--66, 2009.

[15] M. Deepamalar and M. Madheswaran, “An enhanced palm vein recognition system using multi-level fusion of multimodal features and adaptive resonance theory”, International Journal of Computer Applications (0975--8887), vol. 1, no. 20, pp. 95--101, 2010.

[16] CASIA MS Palmprint V1 Database, [Online]. Available: http://biometrics.idealtest.org/dbDetailForUser.do?id=5.

Submitted 05.08.2014, accepted 28.11.2014.