ONLINE TRAINING FOR FACE RECOGNITION SYSTEM USING IMPROVED PCA

Widodo Budiharto
Informatics Engineering Dept., School of Computer Science, BINUS University
Jl. K.H. Syahdan No. 9, Palmerah, Jakarta Barat 11480
widodo@widodo.com

ABSTRACT

Variation in illumination is one of the main challenges in face recognition. It has been proven that, in face recognition, differences caused by illumination variations are more significant than differences between individuals. Recognizing faces reliably across changes in pose and illumination using PCA has proved to be a much harder problem because the eigenfaces method compares pixel intensities directly. To solve this problem, this research proposes an online face recognition system using improved PCA for a service robot in an indoor environment based on stereo vision. The training images are improved by generating random values that vary the intensity of the face images. A program for online training is also developed, in which the tested images are captured in real time from a camera. Varying the illumination of the training images increases the accuracy: using the ITS face database the accuracy is 95.5%, higher than the ATT face database at 95.4% and the Indian face database at 72%. The results of this experiment are still being evaluated for future improvement.

Keywords: face recognition, illumination, improved PCA, service robot, ITS face database

INTRODUCTION

The ability to recognize faces and interact with a user in real time is an important issue in developing vision-based service robots. Since face tracking and face recognition are essential functions for a service robot, many researchers have developed face-tracking mechanisms for robots (Yang, 2002) and face recognition systems for service robots (Budiharto, 2010). The objective of this paper is to propose an online training method for a face recognition system using improved principal component analysis (PCA), implemented on a service robot in a dynamic environment using stereo vision. Variation in illumination is a challenging problem for face recognition.
It has been proven that differences caused by illumination variations are more significant than those between individuals (Adini et al., 1997). Recognizing faces reliably across changes in pose and illumination using PCA has proved to be a much harder problem because the eigenfaces method compares pixel intensities directly. To solve this problem, we have improved the training images by generating random values that vary the intensity of the face images, and we propose an online face recognition system using PCA. This model is important because it can be implemented in service robots so that they are able to automatically learn and recognize customers. Experiments using three pose images (front, left and right) of each person show that providing training images with varying illumination improves the recognition success rate. Our proposed method has been successfully implemented on a service robot called Srikandi III in our laboratory.

Literature Study

Improved Face Recognition Using PCA

The face is our primary focus of attention in developing a vision-based service robot. Unfortunately, developing a computational model of face recognition is quite difficult, because faces are complex and multidimensional. Modelling of face images can be based on a statistical model such as principal component analysis (PCA) (Turk & Pentland, 1991) or linear discriminant analysis (LDA) (Etemad & Chellappa, 1997; Belhumeur et al., 1997), as well as on a physical model with assumptions about surface reflectance properties, such as a Lambertian surface (Zoue et al., 2007). Linear discriminant analysis (LDA) is a method for finding the linear combination of variables that best separates two or more classes. In contrast to PCA, which encodes information in an orthogonal linear space, LDA, also known as the Fisherfaces method, encodes discriminatory information in a linearly separable space whose bases are not necessarily orthogonal. However, the LDA result is mostly used as part of a linear classifier (Zhao et al., 1998).

PCA is a standard statistical method for feature extraction that reduces the dimension of the input data by a linear projection maximizing the scatter of all projected samples. The scheme is based on an information theory approach that decomposes face images into a small set of characteristic feature images called "eigenfaces", the principal components of the initial training set of face images. Recognition is performed by projecting a new image into the subspace spanned by the eigenfaces, called "face space", and then classifying the face by comparing its position in face space with the positions of known individuals. PCA-based approaches typically include two phases: training and classification. In the training phase, an eigenspace is established from the training samples using PCA and the training face images are mapped to the eigenspace for classification. In the classification phase, an input face is projected to the same eigenspace and classified by an appropriate classifier (Turk & Pentland, 1991).

Let a face image I(x,y) be a two-dimensional N by N array of (8-bit) intensity values. An image may also be considered as a vector of dimension N^2, so that a typical image of size 256 by 256 becomes a vector of dimension 65,536, or a point in a 65,536-dimensional space. If \Phi_k denotes a training face image and M is the number of training images, we compute the eigenspace u_i as:

u_i = \sum_{k=1}^{M} v_{ik} \Phi_k                                    (1)

where u_i is the i-th eigenface and v_{ik} is the k-th component of the i-th eigenvector. We can then determine which face class provides the best description of an input face image, and find the face class k, by using the Euclidean distance \varepsilon_k between the new face projection \Omega and the class projection \Omega_k, compared against a threshold \theta:

\varepsilon_k = \| \Omega - \Omega_k \|                               (2)
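To make the two phases concrete, the following is a minimal sketch of eigenface training and nearest-neighbour classification corresponding to equations (1) and (2). It assumes OpenCV's cv::PCA is available; the paper's own implementation is built with the Visual C++ Technical Pack, so the library calls, the 20 retained components, the rejection threshold and the helper name recognize are illustrative assumptions, not the author's code. Each training image's projection stands in here for the class projection \Omega_k.

#include <opencv2/opencv.hpp>
#include <vector>

// Builds the eigenspace from the training images (equation 1) and classifies a
// probe face by the Euclidean distance of equation (2) against a threshold.
// Returns the matched face_id, or -1 if no training face is closer than theta.
int recognize(const std::vector<cv::Mat>& trainFaces,   // grayscale, equal size
              const std::vector<int>& labels,           // face_id for each image
              const cv::Mat& probeFace,
              double theta)
{
    // Stack every training image as one row of the data matrix.
    const int M = static_cast<int>(trainFaces.size());
    cv::Mat data(M, static_cast<int>(trainFaces[0].total()), CV_32F);
    for (int k = 0; k < M; ++k) {
        cv::Mat row = data.row(k);
        trainFaces[k].reshape(1, 1).convertTo(row, CV_32F);
    }

    // Training phase: establish the eigenspace (the eigenvectors play the role of u_i).
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW, 20 /* retained components */);

    // Classification phase: project the probe (Omega) and every training image (Omega_k).
    cv::Mat probeRow;
    probeFace.reshape(1, 1).convertTo(probeRow, CV_32F);
    cv::Mat omega = pca.project(probeRow);

    int bestLabel = -1;
    double bestDist = theta;                             // reject faces farther than theta
    for (int k = 0; k < M; ++k) {
        cv::Mat omegaK = pca.project(data.row(k));
        double epsilonK = cv::norm(omega, omegaK, cv::NORM_L2);
        if (epsilonK < bestDist) { bestDist = epsilonK; bestLabel = labels[k]; }
    }
    return bestLabel;                                    // -1 means "not identified"
}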
The stereo camera used in this research is 640x480 pixels. The face image is cropped to 92x112 pixels using the region of interest (ROI) method, as shown in Figure 1. These images are also used as training images for the face recognition system. We use histogram equalization for contrast adjustment based on the image's histogram. This method usually increases the global contrast of an image, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities are better distributed over the histogram, which allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.

Figure 1. Image preprocessing: original image (a), greyscale image (b), and histogram-equalized image (c).

A practical face recognition system needs to deal with illumination variation, which is a main challenging problem. Various methods have been proposed to solve it, such as face and illumination modeling, illumination-invariant feature extraction, and preprocessing and normalization. In Belhumeur and Kriegman (1998), an illumination cone is proposed for the first time. It is proved that the set of n-pixel images of a convex object with a Lambertian reflectance function, under an arbitrary number of point light sources at infinity, forms a convex polyhedral cone in R^n called the "illumination cone" (Belhumeur & Kriegman, 1998). In this research, we construct images under different illumination conditions by generating random values for the brightness level, developed using the Visual C++ Technical Pack, with the formula:

f_o = f_1 + d                                                         (3)

where f_o is the intensity value after the brightness operation is applied, f_1 is the intensity value before the brightness operation is applied, and d is the brightness level. The effect of the brightness level is shown in the histogram below (Figure 2).

Figure 2. Effects of varying the illumination for a face.

For storing users' training faces, we propose a simple face table database (face_id, name, date_registered, image_file1, image_file2, image_file3) as shown in Figure 3.

Figure 3. Proposed face database using one table, with three images for each person (front, left and right side).

METHOD

We have developed a method for online training of the face recognition system. The training images, with extension .pgm, are stored in a directory. In online mode, the program stores the face images to files. In training mode, the training images are used for training. In testing mode, the input image from the camera is compared with the training images.
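Before training, the stored images can be brightness-varied according to equation (3) and histogram-equalized as described in the previous section. The following is a minimal sketch of this preprocessing step; it assumes OpenCV is available (the paper's own implementation uses the Visual C++ Technical Pack), and the random range [-50, 50], the file names and the helper name varyIllumination are illustrative assumptions.

#include <opencv2/opencv.hpp>
#include <cstdlib>

// Applies equation (3), f_o = f_1 + d, with a random brightness level d, then
// histogram equalization, to one grayscale face image.
cv::Mat varyIllumination(const cv::Mat& face)
{
    int d = (std::rand() % 101) - 50;        // random brightness level in [-50, 50] (assumed range)

    cv::Mat shifted;
    face.convertTo(shifted, CV_8U, 1.0, d);  // f_o = f_1 + d, saturated to [0, 255]

    cv::Mat equalized;
    cv::equalizeHist(shifted, equalized);    // spreads out the most frequent intensities
    return equalized;
}

// Example: create a brightness-varied copy of one stored .pgm training image.
// cv::Mat face = cv::imread("faces/1_front.pgm", cv::IMREAD_GRAYSCALE);
// cv::imwrite("faces/1_front_var.pgm", varyIllumination(face));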
The algorithm is shown below:

Algorithm 1. Online training for face recognition system using improved PCA.

Begin
  Call onlineTraining
  If customer_identified == true then
    Display name and confidence
  Else
    No action
  End if
End

Function onlineTraining
  Call faceDetection
  If (menu == save) then
    Save name
    Save training images
  End if
  If (menu == finish) then
    Train the training images
    Exit
  End if
End function

Function faceDetection
  Save input image
  Face detection using Haar Classifier
End function

Function faceRecognition
  Do PCA
  If (face_recognition == true) then
    customer_identified = true
    Set customer_name
  Else
    customer_identified = false   // no action for the robot if customer not identified
  End if
End function
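As an illustration of the faceDetection step above, the following is a minimal sketch using a Haar cascade classifier. It assumes OpenCV (the paper names a "Haar Classifier" but not a specific library); the cascade file name and the helper name detectFace are illustrative, and the detected face is cropped and resized to the 92x112 training size described earlier.

#include <opencv2/opencv.hpp>
#include <vector>

// Detects the largest face in a camera frame with a Haar cascade and returns it
// cropped and resized to the 92x112 training size used by the recognizer.
bool detectFace(const cv::Mat& frame, cv::Mat& faceOut)
{
    static cv::CascadeClassifier cascade("haarcascade_frontalface_default.xml");

    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(gray, faces, 1.1, 3);   // scale factor, min neighbours
    if (faces.empty())
        return false;                                // customer_identified stays false

    cv::Rect best = faces[0];                        // keep the largest detection (ROI)
    for (const cv::Rect& r : faces)
        if (r.area() > best.area()) best = r;

    cv::resize(gray(best), faceOut, cv::Size(92, 112));
    return true;
}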
We have developed a vision-based service robot called Srikandi III with the ability to perform face recognition and to avoid people as moving obstacles. This wheeled robot is the next generation of Srikandi II (Budiharto, 2010). The prototype of the service robot Srikandi III, which uses a low-cost Minoru 3D stereo camera, is shown in Figure 4.

Figure 4. Prototype of service robot Srikandi III using stereo vision.

RESULTS AND DISCUSSION

We have identified the effect of varying illumination on the recognition accuracy for our database, called the ITS face database, as shown in Table 1. The results show that by providing enough training images with randomly generated illumination variation, the success rate of face recognition is improved. If the recognition test does not apply illumination variation and enough training images are available, PCA can recognize people's faces with a success rate of up to 100%. But if illumination variation is applied and the training images are not sufficient (for example, six tested images with only six training images), the success rate is only 50%. However, if the number of training images is increased, the success rate can reach 100%. Based on Table 1, too many tested images with varied illumination decrease the success rate from 100% to 91.60% because the training images are not adequate to cover the illumination variation of the tested images. Therefore, the best way to increase the success rate with many illumination-varied tested images is to have an adequate number of training images.

Table 1. Tested Images without and with Illumination Variation

Condition                  Training images   Tested images   Success rate
No varying illumination    6                 6               100%
                           12                6               100%
                           24                6               100%
Varying illumination       6                 6               50.00%
                           12                6               66.00%
                           24                6               100%
                           24                10              91.60%

We also evaluated the results of our proposed face recognition system and compared them with the ATT and Indian face databases using the Face Recognition Evaluator developed in Matlab. Each face database consists of ten sets of people's faces. Each set of the ITS face database consists of three poses (front, left and right side), varied in illumination. The ATT face database consists of nine different facial expressions and small occlusions (by glasses) without illumination variation. The Indian face database consists of eleven pose orientations without illumination variation, and the size of each image is smaller than that of the ITS and ATT face databases. The success rate comparison among the three face databases is shown in Figure 5. From the figure it is clearly noticed that the ITS database has higher accuracy than the ATT and Indian face databases when the illumination of the tested images is varied. The accuracy using PCA on the ITS face database is 95.5%, higher than the ATT face database at 95.4% and the Indian face database (IFD) at 72%.

Figure 5. Face recognition accuracy of the 3 face databases (each uses 10 face sets).

For total execution time (Figure 6), it is noticed that the Indian face database (IFD) takes the shortest time because the size of each image is the smallest of the three databases. Figure 7 below shows the duration of the training time for the ITS and ATT face databases. Because ITS and ATT have the same image size, they have the same training time, in contrast to the IFD.

Figure 6. Total execution time for ITS, ATT and IFD.

Figure 7. Training time for ITS, ATT and IFD.

The online training for the face recognition system successfully identified the user's face, as displayed in Figure 8. This information is stored in an .xml file.

Figure 8. Result of our online training for face recognition system. The program successfully identified the customer and other information.

CONCLUSION

This article presents an online training method for a face recognition system using improved PCA. The system is implemented on a service robot in a dynamic environment using stereo vision. From the results it is proved that varying the illumination of the training images increases the success rate of face recognition. The accuracy test using our proposed ITS face database gives 95.5% accuracy, a bit higher than the ATT face database at 95.4% and the Indian face database at 72%.

An adequate number of tested images with illumination variation is required to make sure the recognition process is accurate. The simple face database system proposed can be used for the vision-based service robot. The experimental results in various situations show that the proposed methods and algorithms work well together. We hope this system can be improved and implemented for any vision-based service robot. In future work, we will implement this system and develop a vision-based humanoid service robot for serving customers at cafes/restaurants.

REFERENCES

Adini, Y., Moses, Y., & Ullman, S. (1997). Face Recognition: The Problem of Compensating for Changes in Illumination Direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 721-732.

Belhumeur, P., & Kriegman, D. (1998). What Is the Set of Images of an Object under All Possible Illumination Conditions? International Journal of Computer Vision, 28(3), 245-260.

Etemad, K., & Chellappa, R. (1997). Discriminant Analysis for Recognition of Human Face Images. Journal of the Optical Society of America, 14(8), 1724-1733.

Turk, M., & Pentland, A. (1991). Face Recognition Using Eigenfaces. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 586-591.

Yang, M. (2002). Detecting Faces in Images: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(1), 34-58.