Emotion Extraction from Facial Expressions by Using Artificial Intelligence Techniques

Hakan Boz
Usak University, Usak, Turkey
İzmir Yolu 8. Km. Atatürk Bulvarı, 64000 Merkez/Uşak, Turkey
Tel.: +90 276 221 21 21
hakan.boz@usak.edu.tr

Utku Kose
Suleyman Demirel University, Isparta, Turkey
Çünür Mahallesi, Süleyman Demirel Caddesi, Merkez/Isparta, Turkey
Tel.: +90 246 211 10 00
utkukose@sdu.edu.tr

Abstract
Nowadays, there is hardly an application area in which Artificial Intelligence oriented solutions are not employed. Its use can be seen even in daily life, and the solution scope of this scientific field of the future is growing day by day. Because of its great advantages in analyzing the physical world and solving real-world problems, Artificial Intelligence techniques are often employed for research problems that cannot be solved with traditional computational approaches. In this context, using intelligent systems to understand human features has been one of the most popular research interests in recent years. Accordingly, the objective of this study is to introduce a research effort in which a system able to extract emotions from individuals' facial expressions was designed and developed. In detail, the system considered here uses a Cascade Feed-forward Artificial Neural Network model trained by a recent optimization algorithm called the Vortex Optimization Algorithm. The developed system has been applied to different sets of photos from the literature, and positive results were obtained for each set considered.

Keywords: facial expression recognition, artificial intelligence, artificial neural networks, vortex optimization algorithm, emotions, neuromarketing

1. Introduction
Humankind currently faces many rapid innovations shaping a future full of automated technologies.
Although this situation is also associated with developments in electronics, the intelligent behaviors of automated technologies are greatly shaped by the rise of Artificial Intelligence. As the science of designing and developing intelligent systems and machines (Flasiński, 2016; Ginsberg, 2012; Nilsson, 2014), Artificial Intelligence has a remarkable role in solving real-world problems effectively and efficiently. Because of the strong relations between Artificial Intelligence and other supporting technologies (e.g., computer technologies, communication technologies), humankind is experiencing a life built on analyses, decisions, and even predictions made by intelligent systems. The associated literature of Artificial Intelligence consists of many different research interests dealing with specific problems of real life; hundreds of different research topics can be extracted from the research studies performed so far. Of course, trends in the objective problems solved by using Artificial Intelligence may change over time. In recent years, an important and remarkable problem of Artificial Intelligence has been facial recognition (Amos et al., 2016; Ding & Tao, 2015; Ding & Tao, 2017; Karczmarek et al., 2017; Lopes et al., 2017; Parkhi et al., 2015; Sun et al., 2015; Zhang et al., 2016). Because the problem of detecting individuals' faces and analyzing the features of the detected faces to derive ideas about individuals' emotions, actions, general events, etc. is very important for many research objectives, there has been a remarkable effort on designing and developing alternative approaches in the literature (AbdAlmageed et al., 2016; Bashyal & Venayagamoorthy, 2008; Guo et al., 2000; Liu et al., 2014; Ma & Khorasani, 2004; Mao et al., 2015; Ouyang et al., 2015; Samir et al., 2006; Zhou et al., 2015).

BRAIN – Broad Research in Artificial Intelligence and Neuroscience, Volume 9, Issue 1 (February 2018), ISSN 2067-8957
Facial recognition, and alternative recognition approaches for extracting features from individuals, are important even for fields like marketing and education, in order to understand more about behaviors (Fugate, 2007; Gates, 2011; Kapoor & Picard, 2005; Landwehr et al., 2011; Loh et al., 2006; Shen et al., 2009). The objective of this study is to present a research process in which an intelligent system able to extract emotions from individuals' facial expressions was designed and developed. In detail, the system considered here uses a Cascade Feed-forward Artificial Neural Network model trained by a recent optimization algorithm called the Vortex Optimization Algorithm. In the research process, the aim is to apply the developed system to different sets of photos from the associated literature. At this point, the related Artificial Neural Network model, with different numbers and types of outputs, will be used for the different sets, and the findings obtained via this emotion extraction system will be evaluated. On the other hand, it is also aimed to set different parameters for the 'trainer' algorithm, the Vortex Optimization Algorithm, to see whether different configurations of the optimization algorithm affect the Artificial Neural Network model's emotion extraction.

Taking the subject of the study into consideration, the remaining content of the paper is organized as follows: The next section is devoted to essential information about the concept of emotions, measuring emotions, and facial expressions in this manner. Following that, the third section explains the methodology of the study by giving information about the structure of the formed system, the data sets, and the planned emotion extraction applications for these data sets. Next, the fourth section provides the findings and a discussion of them, and finally, the last section provides conclusions and explanations of planned future work.

2. Background
Understanding human behavior is a complex phenomenon, and a multidimensional approach is needed to understand emotions because of their complex nature. In the last thirty years there has been an explosion of papers in the economics, psychology, marketing, management, and neurology literature trying to identify how specific emotions such as anger, sadness, happiness, fear, joy, disgust, and surprise affect human decisions (Damasio, 1994; Cornelius, 1996; Elster, 1998; Fredrickson, 1998; George, 2000; Loewenstein, 2000; Fox et al., 2001; Colibazzi et al., 2010; White, 2010; Koc & Boz, 2014; Boz, Arslan & Koc, 2017). Here, it is important to briefly address why emotions have become so important, especially in recent studies.

2.1. Emotions
Emotions play a major role in consumers' decision-making processes (Fessler et al., 2004; Yuksel, 2007). Emotion is one of the most attractive fields in marketing research due to its role in consumer behavior (Norman, 2003). It is therefore important for marketers to understand consumers' feelings towards products. But it is quite difficult to measure emotions because they are both abstract and hidden. Emotions can be measured by different data collection methods such as questionnaires and interviews. However, it is difficult to measure emotions with such traditional data collection methods for two main reasons. The first is that emotions can be affected by subliminal processes and hidden motives, so consumers may not be aware of the actual reasons behind their purchasing decisions. The second is that consumers may avoid expressing the truth because of impression management (Koc & Boz, 2014). In recent years, neuromarketing has made significant contributions to measuring the emotions affecting consumers' purchasing decisions. At this point, there are different types of data used for measuring emotions.
In this sense, facial expressions play an important role in detecting emotions well.

2.2. Data for Measuring Emotions and Facial Expressions
In recent years, devices such as Electroencephalography (EEG), Magnetic Resonance Imaging (MRI) scanners, Functional Magnetic Resonance Imaging (fMRI) scanners, Positron Emission Tomography (PET), Electromyography (EMG), eye trackers, and Galvanic Skin Response (GSR) have begun to be used to measure emotions in marketing research. Facial expression recognition is one of the most active methods for measuring emotions in neuromarketing studies (Garbas et al., 2013; Dieckmann & Unfried, 2014; Hamelin et al., 2017). Facial expression recognition includes both the measurement of facial motion and the recognition of emotions (Tian et al., 2011). The face is one of the best sources of information about customers' actual emotions (Wyrembelski, 2014), because facial movements and gestures are the basis of communication between people, and it is stated that the 43 facial muscles can combine to produce some 10,000 facial expressions (Bejgu & Mocanu, 2014). Facial expressions convey a multitude of information during the communication process about the attention, emotion, and attitude of people (Argyle & Cook, 1976; Ekman & Rosenberg, 1997; Kendon, 2000). Mehrabian (1968) stated that facial expressions constitute 55% of a communicated message's efficiency.

3. Methodology
This section is devoted to the general methodology used for extracting emotions from facial expressions. As expressed before, the approach introduced here relies on Artificial Intelligence, and to achieve that, the authors have benefited from collected data. So, it is important to explain the details of these aspects respectively.

3.1. Data Collection
Considering the associated literature, many different data sets have been created for facial recognition. Although these data sets can include photos of different resolutions with changing features of the viewed faces, the main objective is generally to find the points determining the different parts of a face. After determining the related parts of a face, it becomes easier to form an idea about, e.g., the facial expression. On the other hand, different genders, different races, and even changing sizes of a head and face can affect the way a facial expression is detected. So, it has always been important, especially for researchers working on detecting facial expressions with intelligent systems, to use appropriate types of data. Considering the recent literature, this study employed several data sets to realize the application of emotion extraction from facial expressions (Figure 1). The related data sets are:

- Set 1: A data set including 490 photos with gender information, adjusted for 7 emotions (Grgic & Delac, 2017; Gross, 2005).
- Set 2: The Chicago data set, including 810 photos with gender and race information, adjusted for 7 emotions (Ma et al., 2015).
- Set 3: 100 photos chosen randomly by the authors, adjusted for 7 emotions.

In order to train the formed system with accurate training data, it is important to use appropriate points from face photos. The chosen points should capture facial emotions effectively enough that a trained Artificial Intelligence based system will extract emotions well from newly encountered points. To achieve that, the Microsoft Kinect 3D infrastructure was used to gather 3D face points (Figure 2; Microsoft, 2018). Microsoft Kinect 3D is a popular system of hardware and software solutions for effective recognition processes from photos or videos (Cao et al., 2014; Li et al., 2013; Sato et al., 2017; Silverstein & Snyder, 2017; Zhang, 2012).
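As a quick sanity check on the numbers above: combining the three sets gives 1,400 photos in total, and with the 70/30 training-test split described in Section 3.3, the per-set test counts reported in the findings follow directly. A small sketch using only the set sizes stated in the text:

```python
# Set sizes as stated in the text; the 70/30 training-test split
# (70% training, 30% testing) is described in Section 3.3.
sets = {"Set 1": 490, "Set 2": 810, "Set 3": 100}

test_counts = {name: round(0.30 * size) for name, size in sets.items()}
total_photos = sum(sets.values())          # combined general data set
total_test = round(0.30 * total_photos)    # test photos when all sets are combined

print(test_counts)   # {'Set 1': 147, 'Set 2': 243, 'Set 3': 30}
print(total_photos)  # 1400
print(total_test)    # 420
```

These counts (147, 243, 30, and 420 combined) match the test-photo numbers reported with the final application findings.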
Figure 1. Different data sets for the application of emotion extraction from facial expressions.

As indicated before, Set – 1 also comes with gender features, whereas Set – 2 includes information on both gender and race. At this point, the input side of the training data was formed from rows including a total of 142 values: x and y coordinate values for each of 70 face points, one gender value (numerical), and one race value (numerical). When a photo's set did not provide gender or race information, the corresponding value was defined as '-1' in the training data. On the output side of the training data, the following emotions were defined for each row:

- Fearful (value = 1)
- Angry (value = 2)
- Disgusted (value = 3)
- Surprised (value = 4)
- Happy (value = 5)
- Sad (value = 6)
- Neutral (value = 7)

Figure 2. Microsoft Kinect 3D for visual recognition processes (Microsoft, 2018).

By combining all three data sets, a general data set with 1,400 rows was formed. After the training and test data were formed from the related data sets, emotion extraction was done with the hybrid system explained briefly under the following sub-section.

3.2. Extraction System Formed via Artificial Intelligence Techniques
In order to extract emotions from facial expressions, a Cascade Feed-forward Artificial Neural Network was employed. In this widely used Artificial Neural Network model, each layer receives connections from the input layer and from every previous layer (Bozkurt et al., 2014; Goyal & Goyal, 2011; Savaci, 2006). In this way, the feed-forward mechanism achieves a better training phase for solving the objective problem (Bozkurt et al., 2014; Goyal & Goyal, 2011; Savaci, 2006).
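The cascade connection scheme described above differs from a plain multilayer perceptron in exactly one respect: each layer sees the original input concatenated with the outputs of all earlier layers. The NumPy sketch below illustrates this wiring for the model dimensions used in the study (142 inputs, two hidden layers of 35 neurons, 7 emotion outputs); the weight values here are random placeholders, whereas in the actual system they would be produced by training:

```python
import numpy as np

def cascade_forward(x, layers):
    """Forward pass of a cascade feed-forward network: every layer receives
    the original input concatenated with the outputs of all previous layers."""
    acc = np.asarray(x, dtype=float)  # running concatenation of input + hidden outputs
    for i, (W, b) in enumerate(layers):
        z = W @ acc + b
        if i == len(layers) - 1:
            return z                              # linear output layer
        acc = np.concatenate([acc, np.tanh(z)])   # cascade the new activations forward

# Layer shapes for 142 inputs, two hidden layers of 35 neurons, 7 outputs.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(35, 142)), np.zeros(35)),          # hidden 1: sees the input
    (rng.normal(size=(35, 142 + 35)), np.zeros(35)),     # hidden 2: input + hidden 1
    (rng.normal(size=(7, 142 + 35 + 35)), np.zeros(7)),  # output: input + hidden 1 + hidden 2
]
y = cascade_forward(rng.normal(size=142), layers)  # 7 scores, one per emotion
```

Note how the fan-in of each layer grows by 35 at every step; this growing fan-in is what the text means by connections "from the input layer and every previous layer".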
Differently from its traditional training style, the model considered here was trained by using a recent optimization algorithm called the Vortex Optimization Algorithm (Figure 3).

Figure 3. The extraction system formed with the Cascade Feed-forward Artificial Neural Network and the Vortex Optimization Algorithm.

The Vortex Optimization Algorithm is an intelligent, Swarm Intelligence based algorithm, developed by Kose and Arslan by taking inspiration from the dynamics of vortices in nature (Kose, 2017; Kose & Arslan, 2015). In detail, the algorithm uses some role-based and evolutionary mechanisms to achieve the optimization solution process (Kose, 2017; Kose & Arslan, 2015). A flow chart of the algorithm is provided in Figure 4 (Kose & Arslan, 2015).

Figure 4. A flow chart of the Vortex Optimization Algorithm (Kose & Arslan, 2015).

The formed hybrid system was employed for the emotion extraction purpose, using combinations of the related data sets. But before performing the actual extraction process, it is important to determine how many hidden layers, and how many artificial neurons within them, will be used for the extraction application. It is also important to determine the essential parameters of the Vortex Optimization Algorithm. So, the remaining work for the research objectives was done in two phases: pre-tests and final emotion extraction applications.

3.3. Application Processes
While training the objective Neural Network model with the chosen algorithm, 70% of the general data set was used for training, while the remaining amount was reserved for the emotion extraction (test and application) processes. A total of 5,000 iterations was used in all of the performed training applications.
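The exact update rules of the Vortex Optimization Algorithm are given in Kose & Arslan (2015). As a rough illustration of the general idea of training network weights with a population-based optimizer instead of backpropagation, the following simplified sketch is a generic stand-in, not the actual VOA; the shrinking search radius only loosely mimics its vortex-like narrowing:

```python
import numpy as np

def population_train(loss_fn, dim, particles=50, iterations=200, seed=1):
    """Very simplified population-based trainer (a stand-in for VOA, not the
    real algorithm): sample candidate weight vectors around the best one found
    so far, keep the best, and shrink the sampling radius over the iterations."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=dim)
    best_loss = loss_fn(best)
    radius = 1.0
    for _ in range(iterations):
        candidates = best + radius * rng.normal(size=(particles, dim))
        losses = np.array([loss_fn(c) for c in candidates])
        i = int(losses.argmin())
        if losses[i] < best_loss:
            best, best_loss = candidates[i], losses[i]
        radius *= 0.99  # gradually narrow the search around the best solution
    return best, best_loss

# Toy usage: fit a weight vector toward a known target.
target = np.array([1.0, -2.0, 0.5])
w, final_loss = population_train(lambda v: float(np.sum((v - target) ** 2)), dim=3)
```

In the paper's setting, `loss_fn` would evaluate the Cascade Feed-forward network's error on the training rows for a given flattened weight vector, and `particles` plays the same role as the particle count tuned in the pre-tests.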
Before determining the exact model for emotion extraction, different combinations of hidden layers and artificial neurons within them were trained by using different combinations of data sets and the default parameter values of the Vortex Optimization Algorithm (Table 1 and Table 2). This phase is called the pre-test phase; after it, the determined hybrid system model was used in the final applications phase, in order to evaluate the emotion extraction performance of the introduced Artificial Intelligence based system. Success rates achieved with the performed training applications are shown in Table 1.

Table 1. Success rates obtained with different combinations of training hybrid models with default optimization algorithm parameters.

Hidden Layers | Neurons | Used Set | Success Rate (%)
1 | 50 | Set – 1 | 68
1 | 50 | Set – 2 | 64
1 | 30 | Set – 1 | 71
1 | 30 | Set – 2 | 70
2 | 35 | Set – 1 | 77
2 | 35 | Set – 2 | 75
2 | 35 | Set – 1 & 2 | 72
2 | 35 | Set – 1 & 3 | 81
2 | 35 | Set – 1 & 2 & 3 | 79
1 | 40 | Set – 1 & 2 | 68
2 | 20 | Set – 1 & 3 | 75
2 | 45 | Set – 1 & 2 & 3 | 71

Table 2. Default parameters of the Vortex Optimization Algorithm.

Particles | Vorticity | Max./Min. Vorticity | Elimination Rate
50 | 0.40 | 7 (+/-) | 50

As can be seen from Table 1, an appropriate model of the Cascade Feed-forward Neural Network can be formed by using 2 hidden layers with 35 artificial neurons each. After choosing that hidden-layer structure, the authors tried to improve the success rate by using different combinations of parameters of the Vortex Optimization Algorithm. The success rates obtained in this final step of the pre-tests are provided in Table 3.

Table 3. Obtained success rates with different combinations of Vortex Optimization Algorithm parameters used to train the chosen Cascade Feed-forward Neural Network model.

Particles | Vorticity | Max./Min. Vorticity | Elimination Rate | Success Rate (%)
10 | 0.25 | 10 (+/-) | 50 | 56
20 | 0.25 | 5 (+/-) | 30 | 61
30 | 0.75 | 6 (+/-) | 50 | 62
50 | 0.50 | 7 (+/-) | 70 | 74
50 | 0.90 | 8 (+/-) | 65 | 76
100 | 0.50 | 7 (+/-) | 60 | 73
100 | 0.50 | 7 (+/-) | 30 | 78
100 | 0.50 | 7 (+/-) | 50 | 84

As can be seen from Table 3, the success rate of the hybrid system was improved by finding better parameters for the trainer technique, the Vortex Optimization Algorithm. As a result of the pre-tests, the optimum parameters presented in Table 4 were determined for both techniques, to be used in the final emotion extraction applications.

Table 4. Determined optimum parameters for both techniques forming the hybrid system.

Hidden Layers | Neurons | Particles | Vorticity | Max./Min. Vorticity | Elimination Rate
2 | 35 | 100 | 0.50 | 7 (+/-) | 50

4. Final Application Findings and Discussion
After the pre-tests to determine the optimum parameters for both techniques forming the hybrid system, final emotion extraction applications were performed by using different combinations of the data sets. In this context, the authors also developed a software system, using the C# programming language, to provide a visual environment for tracking all necessary data during the training and application phases, for both photos and videos (future work) (Figure 5). The findings, including the success rates achieved in the final applications, are provided in Table 5.

Figure 5. Software system developed for tracking all necessary data.

Table 5. Findings obtained with the final emotion extraction applications.
Objective Set | Total Number of Test Photos | Success Rate (%)
Set – 1 | 147 | 84
Set – 2 | 243 | 77
Set – 3 | 30 | 87
Set – 1 & 2 | 390 | 75
Set – 1 & 3 | 177 | 81
Set – 2 & 3 | 273 | 79
Set – 1 & 2 & 3 | 420 | 73

Based on the findings and the experiences that the authors had with the performed application processes, the following remarkable points can be expressed about the Artificial Intelligence based system for emotion extraction:

- The hybrid system considered here seems to have enough capability to extract emotions from facial expressions.
- Even when the photos come from complicated sets, the system can still deal with the problem of determining the true emotion among 7 different emotions for the target photo(s).
- The pre-tests done before the full emotion extraction operations showed that it is important to find optimum parameters for all techniques run under a common hybrid approach.
- A greater number of particles for an intelligent optimization algorithm affects the objective optimization problem (here, training) in a positive way.
- Combining the three different sets did not degrade the extraction rate of the system too much, even though the sets do not match in terms of gender and race features (inputs).

5. Conclusion and Future Work
In this study, an Artificial Intelligence based system for extracting emotions from facial expressions has been introduced. Differently from similar applications, the system formed here consists of a Cascade Feed-forward Artificial Neural Network model trained by a recent optimization algorithm called the Vortex Optimization Algorithm. Additionally, the research was done by using different settings of the emotion extraction system for different data sets of photos from the literature.
Obtained findings from the reported applications show that the formed system is effective enough to extract different emotions from photos of individuals of different genders and even different races. It has also been seen that mixing data from different sets can affect the extraction performance of an intelligent system. The findings obtained in the study have also encouraged the authors to plan further future work on the followed research process. First of all, it is planned to change settings such as the determined number of points on the face, the photo size, or other conditions, to see if extraction performance can be improved. It is also planned to use more alternative data sets with the same settings of the introduced approach, to see whether larger amounts of data affect the performance of the system. Finally, as can be understood from the explanations provided in previous sections about the software system developed for tracking data, the system will be applied to live or recorded videos.

Acknowledgements
This study has been supported by the Usak University Scientific Research and Projects Unit under project number 2017/HD-SOSB004.

References
AbdAlmageed, W., Wu, Y., Rawls, S., Harel, S., Hassner, T., Masi, I., ... & Nevatia, R. (2016). Face recognition using deep multi-pose representations. In Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on (pp. 1-9). IEEE.
Amos, B., Ludwiczuk, B., & Satyanarayanan, M. (2016). OpenFace: A general-purpose face recognition library with mobile applications. CMU School of Computer Science.
Argyle, M., & Cook, M. (1976). Gaze and mutual gaze. Cambridge: Cambridge University Press.
Bashyal, S., & Venayagamoorthy, G. K. (2008). Recognition of facial expressions using Gabor wavelets and learning vector quantization. Engineering Applications of Artificial Intelligence, 21(7), 1056-1064.
Bejgu, A., & Mocanu, I. (2014). Facial emotion recognition using Kinect. Journal of Information Systems & Operations Management, 1.
Boz, H., Arslan, A., & Koc, E. (2017). Neuromarketing aspect of tourism pricing psychology. Tourism Management Perspectives, 23, 119-128.
Bozkurt, M. R., Yurtay, N., Yilmaz, Z., & Sertkaya, C. (2014). Comparison of different methods for determining diabetes. Turkish Journal of Electrical Engineering & Computer Sciences, 22(4), 1044-1055.
Cao, C., Weng, Y., Zhou, S., Tong, Y., & Zhou, K. (2014). FaceWarehouse: A 3D facial expression database for visual computing. IEEE Transactions on Visualization and Computer Graphics, 20(3), 413-425.
Colibazzi, T., Posner, J., Wang, Z., Gorman, D., Gerber, A., Yu, S., ... & Peterson, B. S. (2010). Neural systems subserving valence and arousal during the experience of induced emotions. Emotion, 10(3), 377.
Cornelius, R. R. (1996). The science of emotion: Research and tradition in the psychology of emotions. Prentice-Hall.
Damasio, A. (1994). Descartes' error: Emotion, reason and the human brain. New York: Putnam.
Dieckmann, A., & Unfried, M. (2014). Writ large on your face: Observing emotions using automatic facial analysis. GfK Marketing Intelligence Review, 6(1), 52-58.
Ding, C., & Tao, D. (2015). Robust face recognition via multimodal deep face representation. IEEE Transactions on Multimedia, 17(11), 2049-2058.
Ding, C., & Tao, D. (2017). Trunk-branch ensemble convolutional neural networks for video-based face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Ekman, P., & Rosenberg, E. L. (Eds.). (1997). What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS). Oxford University Press, USA.
Elster, J. (1998). Emotions and economic theory. Journal of Economic Literature, 36(1), 47-74.
Fessler, D. M., Pillsworth, E. G., & Flamson, T. J. (2004). Angry men and disgusted women: An evolutionary approach to the influence of emotions on risk taking. Organizational Behavior and Human Decision Processes, 95(1), 107-123.
Flasiński, M. (2016). Introduction to artificial intelligence. Springer.
Fox, S., Spector, P. E., & Miles, D. (2001). Counterproductive work behavior (CWB) in response to job stressors and organizational justice: Some mediator and moderator tests for autonomy and emotions. Journal of Vocational Behavior, 59(3), 291-309.
Fredrickson, B. L. (1998). What good are positive emotions? Review of General Psychology, 2(3), 300.
Fugate, D. L. (2007). Neuromarketing: A layman's look at neuroscience and its potential application to marketing practice. Journal of Consumer Marketing, 24(7), 385-394.
Garbas, J. U., Ruf, T., Unfried, M., & Dieckmann, A. (2013, September). Towards robust real-time valence recognition from facial expressions for market research applications. In Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on (pp. 570-575). IEEE.
Gates, K. A. (2011). Our biometric future: Facial recognition technology and the culture of surveillance. NYU Press.
George, J. M. (2000). Emotions and leadership: The role of emotional intelligence. Human Relations, 53(8), 1027-1055.
Ginsberg, M. (2012). Essentials of artificial intelligence. Newnes.
Goyal, S., & Goyal, G. K. (2011). Cascade and feedforward backpropagation artificial neural network models for prediction of sensory quality of instant coffee flavoured sterilized drink. Canadian Journal on Artificial Intelligence, Machine Learning and Pattern Recognition, 2(6), 78-82.
Grgic, M., & Delac, K. (2017). Face Recognition Homepage. Online: http://www.face-rec.org/databases/ (Retrieved 23rd Dec. 2017).
Gross, R. (2005). Face databases. In S. Z. Li & A. K. Jain (Eds.), Handbook of Face Recognition. Springer-Verlag.
Guo, G., Li, S. Z., & Chan, K. (2000). Face recognition by support vector machines. In Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on (pp. 196-201). IEEE.
Hamelin, N., El Moujahid, O., & Thaichon, P. (2017). Emotion and advertising effectiveness: A novel facial expression analysis approach. Journal of Retailing and Consumer Services, 36, 103-111.
Kapoor, A., & Picard, R. W. (2005). Multimodal affect recognition in learning environments. In Proceedings of the 13th Annual ACM International Conference on Multimedia (pp. 677-682). ACM.
Karczmarek, P., Kiersztyn, A., & Pedrycz, W. (2017). An evaluation of fuzzy measure for face recognition. In International Conference on Artificial Intelligence and Soft Computing (pp. 668-676). Springer, Cham.
Kendon, A. (2000). Language and gesture: Unity or duality? In Language and Gesture: Window into Thought and Action. Cambridge: Cambridge University Press.
Koc, E., & Boz, H. (2014). Psychoneurobiochemistry of tourism marketing. Tourism Management, 44, 140-148.
Kose, U. (2017). Development of artificial intelligence based optimization algorithms. PhD Thesis, Dept. of Computer Engineering, Selcuk University, Turkey.
Kose, U., & Arslan, A. (2015). On the idea of a new artificial intelligence based optimization algorithm inspired from the nature of vortex. BRAIN - Broad Research in Artificial Intelligence and Neuroscience, 5(1-4), 60-66.
Landwehr, J. R., McGill, A. L., & Herrmann, A. (2011). It's got the look: The effect of friendly and aggressive "facial" expressions on product liking and sales. Journal of Marketing, 75(3), 132-146.
Li, B. Y., Mian, A. S., Liu, W., & Krishna, A. (2013). Using Kinect for face recognition under varying poses, expressions, illumination and disguise. In Applications of Computer Vision (WACV), 2013 IEEE Workshop on (pp. 186-192). IEEE.
Liu, P., Han, S., Meng, Z., & Tong, Y. (2014). Facial expression recognition via a boosted deep belief network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1805-1812).
Loewenstein, G. (2000). Emotions in economic theory and economic behavior. The American Economic Review, 90(2), 426-432.
Loh, M. P., Wong, Y. P., & Wong, C. O. (2006). Facial expression recognition for e-learning systems using Gabor wavelet & neural network. In Advanced Learning Technologies, 2006. Sixth International Conference on (pp. 523-525). IEEE.
Lopes, A. T., de Aguiar, E., De Souza, A. F., & Oliveira-Santos, T. (2017). Facial expression recognition with convolutional neural networks: Coping with few data and the training sample order. Pattern Recognition, 61, 610-628.
Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior Research Methods, 47(4), 1122-1135.
Ma, L., & Khorasani, K. (2004). Facial expression recognition using constructive feedforward neural networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(3), 1588-1595.
Mao, Q. R., Pan, X. Y., Zhan, Y. Z., & Shen, X. J. (2015). Using Kinect for real-time emotion recognition via facial expressions. Frontiers of Information Technology & Electronic Engineering, 16(4), 272-282.
Mehrabian, A. (1968). Communication without words. Psychology Today.
Microsoft. (2018). Kinect for Windows Sensor. MSDN Developer Network. Online: https://msdn.microsoft.com/en-us/library/hh855355.aspx (Retrieved 5th Jan. 2018).
Nilsson, N. J. (2014). Principles of artificial intelligence. Morgan Kaufmann.
Norman, D. A. (2003). Designing Emotions by Pieter Desmet. The Design Journal, 6(2), 60-62.
Ouyang, Y., Sang, N., & Huang, R. (2015). Accurate and robust facial expressions recognition by fusing multiple sparse representation based classifiers. Neurocomputing, 149, 71-78.
Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015). Deep face recognition. In BMVC (Vol. 1, No. 3, p. 6).
Samir, C., Srivastava, A., & Daoudi, M. (2006). Three-dimensional face recognition using shapes of facial curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(11), 1858-1863.
Sato, K., Nose, T., Ito, A., Chiba, Y., Ito, A., & Shinozaki, T. (2017). A study on 2D photo-realistic facial animation generation using 3D facial feature points and deep neural networks. In International Conference on Intelligent Information Hiding and Multimedia Signal Processing (pp. 112-118). Springer, Cham.
Savaci, F. A. (2006). Artificial Intelligence and Neural Networks. Springer.
Shen, L., Wang, M., & Shen, R. (2009). Affective e-learning: Using "emotional" data to improve learning in pervasive learning environment. Journal of Educational Technology & Society, 12(2), 176.
Silverstein, E., & Snyder, M. (2017). Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification. Medical Physics, 44(6), 2391-2399.
Sun, Y., Liang, D., Wang, X., & Tang, X. (2015). DeepID3: Face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873.
Tian, Y., Kanade, T., & Cohn, J. F. (2011). Facial expression recognition. In Handbook of Face Recognition (pp. 487-519). Springer London.
White, C. J. (2010). The impact of emotions on service quality, satisfaction, and positive word-of-mouth intentions over time. Journal of Marketing Management, 26(5-6), 381-394.
Wyrembelski, A. (2014). Detection of the selected, basic emotion based on face expression using Kinect. Online: https://pdfs.semanticscholar.org/6d93/4b68079cbb802fd3bcfac2cae2e5e6d4f7b7.pdf (Retrieved 21st Jan. 2018).
Yuksel, A. (2007). Tourist shopping habitat: Effects on emotions, shopping value and behaviours. Tourism Management, 28(1), 58-69.
Zhang, Z. (2012). Microsoft Kinect sensor and its effect. IEEE Multimedia, 19(2), 4-10.
Zhang, Y. D., Yang, Z. J., Lu, H. M., Zhou, X. X., Phillips, P., Liu, Q. M., & Wang, S. H. (2016). Facial emotion recognition based on biorthogonal wavelet entropy, fuzzy support vector machine, and stratified cross validation. IEEE Access, 4, 8375-8385.
Zhou, Y., Xue, H., & Geng, X. (2015). Emotion distribution recognition from facial expressions. In Proceedings of the 23rd ACM International Conference on Multimedia (pp. 1247-1250). ACM.

Hakan BOZ is an Assistant Professor at Usak University's School of Applied Sciences. He carries out research to better understand consumer and employee behaviour in the tourism and hospitality sectors, especially by using equipment such as accelerometers, GSR, body temperature sensors, airflow (breathing) sensors, heart rate monitors, eye trackers, facial recognition, glucometers, ECG, SpO2, blood pressure (sphygmomanometer), electromyography, EEG, and fMRI, together with Professor Erdogan Koc.

Dr. Utku KÖSE received the B.S. degree in 2008 from the computer education program of Gazi University, Turkey, as a faculty valedictorian. He received the M.S. degree in 2010 from Afyon Kocatepe University, Turkey, in the field of computing, and the D.S./Ph.D. degree in 2017 from Selcuk University, Turkey, in the field of computer engineering. Between 2009 and 2011, he worked as a Research Assistant at Afyon Kocatepe University. He then worked as a Lecturer and Vocational School Vice Director at Afyon Kocatepe University between 2011 and 2012. Between 2012 and 2017, he was a Lecturer at Usak University, Turkey, and also the Director of the Computer Sciences Application and Research Center at the same university. Currently, he works as an Assistant Professor at Suleyman Demirel University, Isparta, Turkey. His research interests include artificial intelligence, optimization, chaos theory, distance education, e-learning, computer education, and computer science.