JURNAL RISET INFORMATIKA Vol. 5, No. 1, December 2022
P-ISSN: 2656-1743 | E-ISSN: 2656-1735
DOI: https://doi.org/10.34288/jri.v5i1.470

MASKED FACE DETECTION AUTOMATION SYSTEM USING MASK THRESHOLD AND VIOLA JONES METHOD

Aminurachma Aisyah Nilatika1, Khoerul Anwar*2, Eka Yuniar3
1,2 Teknologi Informasi, 3 Sistem Informasi
STMIK PPKIA Pradnya Paramita Malang, Indonesia
https://stimata.ac.id/
1aminurachmaaisyahnilatika@gmail.com, 2*)alqhoir@stimata.ac.id, 3eka@stimata.ac.id
(*) Corresponding Author

Abstract
Reducing or even breaking the chain of Covid-19 infections during a pandemic is important. The techniques encouraged are mandatory hand washing, social distancing, and wearing masks. Wearing masks is urgent; therefore, requiring people to wear masks is the right policy. This study aims to detect people who are or are not wearing masks by applying the Viola-Jones method. The threshold algorithm is modified by applying a mask threshold to optimize facial segmentation. Viola-Jones itself combines several concepts, namely the Haar feature, the integral image, AdaBoost, and the cascade classifier, into the main method for detecting objects. The proposed method achieves an accuracy of 95%, a precision of 94.73%, and a recall of 100% for face detection. The masked face detection test achieves an accuracy of 94%, a precision of 100%, and a recall of 90.90%.

Keywords: Segmentation; Face detection; Masks; Viola-Jones; Mask thresholder

INTRODUCTION
The coronavirus that hit various countries began in Wuhan, Hubei, China in 2019 and was later named Coronavirus disease 2019 (Hui et al., 2020). The World Health Organization (WHO) officially declared the coronavirus outbreak a pandemic on March 11, 2020, meaning that the virus had spread widely throughout the world. The Indonesian government also issued a disaster emergency status regarding this pandemic from February 29 to May 29, 2020, a period of 91 days. As of November 1, 2021, the Government of the Republic of Indonesia had recorded 4,244,761 people who tested positive for COVID-19, 143,423 deaths (CFR: 3.4%), and 4,089,419 patients declared recovered. As of 2022 this virus has not completely disappeared; therefore, public vigilance against it must be maintained.
The spread of the Covid-19 virus is indicated to occur mostly when a symptomatic person interacts with others at very close range without proper personal protective equipment (PPE). Transmission can also occur before an infected person develops symptoms; this is called presymptomatic transmission.

The government has taken many steps to overcome this pandemic, one of which is socializing the mandatory hand-washing movement, the social-distancing movement, and the mandatory mask-wearing movement to reduce or even break the chain of Covid-19 infections. The mandatory mask movement requires everyone to wear a mask. Several types of masks are recommended by the government, among them cloth masks, which can be worn for four hours and then washed; surgical masks and N-95 masks are reserved for health workers. In Indonesia, many people still ignore warnings and regulations about wearing masks whenever they are outside the home, and many are unaware of the importance of this prevention. Most Indonesians know that wearing a mask is a positive step to reduce the spread of Covid-19, but fewer than 50% comply.

Detecting people wearing masks is easy for the human eye, but for computer-based intelligent systems it is a challenge. There are two main problems: how the computer can detect human faces, and how the detected faces can be classified as wearing or not wearing a mask directly, without requiring training data, with high accuracy on photo and video images.

Many studies have been conducted to detect faces using various techniques and algorithms implemented on devices with limited resources (Wihandika, 2021). Research by (Suharso, 2017) on facial image detection conducted several experiments with pixel-value thresholds, obtained a best value of 70, used the resulting image to extract Haar features, and reached a face detection accuracy of 90.90%. In contrast, (Kirana & Isnanto, 2016) used PCA for image segmentation before extracting Haar features and obtained a face detection result of 90.90%. Likewise, (Suhery & Ruslianto, 2017) used PCA for facial image segmentation, with a face detection accuracy of 90%. Beyond these methods, face detection with Haar cascades is quite popular, including in (Budiman, 2021; Nono Heryana, Rini Mayasari, 2020; Abidin, 2018; Utaminingrum et al., 2017; Javed Mehedi Shamrat et al., 2021; Padilla et al., 2012; Minu et al., 2020; Poorvi Taunk, G. Vani Jayasri, Padma Priya J, 2020; Braganza et al., 2020). Almost all of these studies use binary images obtained by applying a certain threshold value, with an accuracy rate of about 90% (Kirana & Isnanto, 2016; Suharso, 2017; Suhery & Ruslianto, 2017). Research on masked faces has been carried out with various methods, including CNNs (Sakshi et al., 2021; Vu et al., 2022) and deep learning (Monica M, 2021; Yadav, 2020); (Mufid Naufal Baay, Astria Nur Irfansyah, 2021) reported an accuracy of 82%, while (Öztürk et al., 2021) reported 85%.
Several studies have used Viola-Jones: (Suharso, 2017) reported 88% accuracy and (Putri et al., 2019) reported 90% accuracy. Viola-Jones has also been applied to face detection by (Hassan & Dawood, 2022) and (Florestiyanto et al., 2020; Pratama, 2022; Suharso, 2017), including real-time detection with 67.6% accuracy, while (Fendi et al., 2020) achieved 90.9% accuracy (Karim Sujon et al., 2022). It is therefore possible to develop a threshold method that supports face detection and increases its accuracy.

This study aims to detect human faces that are or are not wearing masks. It applies Viola-Jones together with a mask threshold: the mask threshold reduces interference in the segmentation results, while Viola-Jones is built to detect the masked face.

RESEARCH METHODS
The research framework developed to achieve the objectives consists of three stages. The first stage is preprocessing of the input image by converting its color and size dimensions. The flow of the research framework is shown in Figure 1.

Figure 1. Workflow of The Masked Face Detection System

The output of this stage is a grey-level image. The second stage is face detection by applying the Haar Feature, Integral Image, AdaBoost, and Cascade Classifier methods. The output of the second stage is the face-detected image. The third and final stage is determining whether the detected face image is wearing a mask or not. The development of the proposed threshold is discussed at this stage.

Preprocessing
The input data are obtained from two sources: a webcam and images downloaded from the internet. The data are images or videos. At this stage, two initial treatments are carried out on the image: color conversion and image resizing. The color conversion transforms the three RGB color channels into a single grey-level channel. The grey level was chosen because it supports the subsequent processing: it has one value component, which makes pixel computations easier (Suhery & Ruslianto, 2017). The three color channels r (red), g (green), and b (blue) are converted into the grey-level image, stored in the variable s, by taking the average of r, g, and b as written in Equation 1.

s = (r + g + b) / 3 .................................................... (1)

Image resizing changes the image resolution. It is done by reducing the image's dimensions so that the number of pixels is smaller, by sampling distanced pixels. This is necessary because images obtained from the webcam have a resolution of 480x640 pixels while images downloaded from the internet have varying resolutions, which affects the speed of the subsequent calculations; uniform image dimensions are therefore needed. At this stage, webcam images are resized to 340x240 pixels, while images obtained from files are resized to 50% of their original resolution.
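To illustrate the preprocessing described above, the short Python sketch below applies the grey-level conversion of Equation 1 and a distance-sampling resize with NumPy. This is only a minimal sketch under the assumption that frames are NumPy arrays; the function names are illustrative and not part of the original system.

import numpy as np

def to_gray(rgb):
    # Grey-level conversion of Equation 1: s = (r + g + b) / 3
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return ((r.astype(np.float32) + g + b) / 3.0).astype(np.uint8)

def downsample(img, step=2):
    # Reduce resolution by sampling distanced pixels (every step-th pixel);
    # step=2 keeps 50% of the rows and columns
    return img[::step, ::step]

# Example: a hypothetical 480x640 RGB webcam frame reduced to half size
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
gray = to_gray(frame)
small = downsample(gray, step=2)
print(gray.shape, small.shape)   # (480, 640) (240, 320)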
Data processing
The first step is to determine the unique Haar features that act as keys to the image. Haar features have the advantage of a fast computing process. The Haar features used (Poorvi Taunk, G. Vani Jayasri, Padma Priya J, 2020; VAIBHAV, 2020) are shown in Figure 2.

Figure 2. Features of Haar

The computation is fast because it is calculated from the features contained in the grid frame rather than from every pixel in the image. To obtain the desired Haar features, several main parts of the human face are selected. The parts used as Haar features in this article are the eye, nose, and mouth features. These three regions were chosen because they indicate that each face has a different geometry. The feature selection process can be seen in Figure 3.

Figure 3. Eye, Nose, and Mouth Features

After feature selection, the process continues with image integration. In this step, the pixels in the area read as a Haar feature are summed. The method uses four specific sub-areas of a larger whole area and is therefore considered able to use features efficiently. The pixel values of the integral image are shown as sub-images in Figure 4.

Figure 4. Integral Image Section

L1, L2, L3, and L4 are sub-images of the input. The number of pixels in region A is calculated (Jones, 2004) using Equation 2.

A = L4 + L1 βˆ’ (L2 + L3) .................................................... (2)

Meanwhile, the change in value from the input image to the integral image is calculated using Equation 3.

s(x, y) = i(x, y) + s(x, y βˆ’ 1) + s(x βˆ’ 1, y) βˆ’ s(x βˆ’ 1, y βˆ’ 1) .................................................... (3)

Here s(x, y) is the cumulative sum of the integral image and i(x, y) is the value of the input pixel.

The next process is to determine the features used to set the threshold value with the AdaBoost method. When building a strong classifier, weights are assigned to weak classifiers, which are then combined into one to analyze whether the image contains an object. A weak classifier is a prediction with a limited level of accuracy. The steps AdaBoost takes to build a strong classifier are: normalize the weights to obtain a probability distribution over the candidate weak classifiers; evaluate each candidate weak classifier; select the candidate with the smallest error rate as the weak classifier; then classify all training data with this weak classifier and re-weight the data, increasing the weight of each misclassified sample and appropriately reducing (returning toward the initial value) the weights of the other data. Any misclassification can then be monitored and corrected by the weak classifier at the next stage. The final classifier is obtained by combining all the weak classifiers from each boosting step.

Graded classification uses the cascade classifier method. The cascade classifier has three levels of classification (Suharso, 2017). The process flow for determining the presence or absence of facial features in the image is shown in Figure 5.

Figure 5. Graded Classification Flow
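To make Equations 2 and 3 concrete, the sketch below builds an integral image with cumulative sums and computes a region sum from its four corner values. It is a minimal sketch assuming NumPy; the helper names are hypothetical and do not come from the paper.

import numpy as np

def integral_image(i):
    # Equation 3: s(x, y) = i(x, y) + s(x, y-1) + s(x-1, y) - s(x-1, y-1),
    # computed here with cumulative sums over both axes
    return i.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def region_sum(s, top, left, bottom, right):
    # Equation 2: pixel sum of region A from the four corner values
    # L1 (top-left), L2 (top-right), L3 (bottom-left), L4 (bottom-right)
    l4 = s[bottom, right]
    l1 = s[top - 1, left - 1] if top > 0 and left > 0 else 0
    l2 = s[top - 1, right] if top > 0 else 0
    l3 = s[bottom, left - 1] if left > 0 else 0
    return l4 + l1 - (l2 + l3)

img = np.arange(16).reshape(4, 4)       # toy 4x4 grey-level image
s = integral_image(img)
print(region_sum(s, 1, 1, 2, 2))        # 5 + 6 + 9 + 10 = 30, equal to img[1:3, 1:3].sum()

A Haar feature value is then obtained as the difference between the sums of adjacent rectangular regions computed in this way.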
Segmentation
The process begins with edge detection at the color-gradation boundaries that separate two homogeneous regions of different brightness. This is done to determine the difference in intensity between the background and the face area over the whole digital image. The goal of this process is to obtain a clear boundary line between one area and another in the image (Anwar & Setyowibowo, 2021). It then proceeds by assigning a threshold value to emphasize object and non-object regions, converting the grey-level image into two colors, black and white. In this research, thresholding is done by filtering the pixel values: values below the threshold are rendered black, while values above it are rendered white. The threshold value is the average of the maximum pixel value fmax and the minimum pixel value fmin, calculated using Equation 4.

T = (fmax + fmin) / 2 .................................................... (4)

In Equation 4, T is the threshold value, fmax is the highest pixel value, and fmin is the lowest pixel value.

This segmentation is an operation that breaks an image into several segments according to certain criteria (Anwar et al., 2021) and is closely related to pattern identification. In this study, segmentation is carried out on the facial texture area because the initial image processing focuses only on the core facial composition from the eyes to the chin, which is also where a mask should be located. The results of facial image segmentation indicate whether there are masked faces. The system separates regions identified as having the characteristics of the eye, nose, and mouth objects and places them inside a marker box frame. The pixel values of this rectangular box are stored in a variable used as a mask for this segmentation.

However, the segmentation results have problems, especially when operating on video: interference appears in the background of the object as many white spots. Therefore, this study improves the segmentation results by applying convolution to the background. Convolution is used here to reduce the disturbances that remain after object segmentation. The essence of the method is to multiply two matrices: the first is the binary image obtained from segmentation with the thresholding of Equation 4, and the second is the RGB image matrix. The edge-detected, segmented binary image acts as a mask, and each RGB component is multiplied by this mask image f(x, y) in binary format. The process is formulated in Equation 5 (Anwar et al., 2021). The r, g, and b matrices have pixel values from 0 to 255, while f(xi, yj) is a binary image with values of 0 or 1.

Rf(xi, yj) = r(xi, yj) * f(xi, yj)
Gf(xi, yj) = g(xi, yj) * f(xi, yj) .................................................... (5)
Bf(xi, yj) = b(xi, yj) * f(xi, yj)

The flow of segmentation on a face without a mask can be seen in Figure 6, and segmentation on a face with a mask in Figure 7.

Figure 6. Unmasked Face Image Method
Figure 7. Masked Face Image Method
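As an illustration of the segmentation steps above, the following sketch applies the threshold of Equation 4 and the per-channel multiplication of Equation 5. It is a minimal sketch assuming NumPy image arrays; the function names are illustrative, not taken from the paper.

import numpy as np

def binary_mask(gray):
    # Equation 4: T = (fmax + fmin) / 2, then pixels above T become 1 (white)
    # and pixels below T become 0 (black)
    t = (int(gray.max()) + int(gray.min())) / 2.0
    return (gray > t).astype(np.uint8)

def apply_mask(rgb, f):
    # Equation 5: multiply each RGB component by the binary mask f, so
    # non-object pixels (f = 0) turn black while object pixels keep their color
    return rgb * f[..., np.newaxis]

gray = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # hypothetical grey-level frame
rgb = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8) # matching RGB frame
masked = apply_mask(rgb, binary_mask(gray))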
Dataset
Image data for testing the study results consisted of 20 images: 10 real-time webcam images and 10 downloaded images. The image specifications are: webcam images with a resolution of 480 x 640 pixels; images from uploaded files with various resolutions placed in the program folder; images containing more than one masked face; images containing more than one unmasked face; images containing faces wearing a cloth mask, a medical mask, and no mask; and an image containing a face wearing a face shield. With this diverse data, information about the capability of the proposed method can be obtained properly.

Data analysis technique
The output of this process is the detected image, in which the detected face or face parts are displayed in RGB color and the background is black. Tests of the proposed method are mapped using a confusion matrix (Table 1) and summarized by the precision, recall, and accuracy of the system in detecting the input image, calculated as follows:

precision = TP / (TP + FP) .................................................... (6)
recall = TP / (TP + FN) .................................................... (7)
accuracy = (TP + TN) / (TP + FP + TN + FN) .................................................... (8)
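For reference, the sketch below computes Equations 6 to 8 from confusion-matrix counts. The counts used here are assumed for illustration only and are not reported in the paper, although they happen to reproduce the face detection figures given in the results (precision 94.73%, recall 100%, accuracy 95%).

def evaluate(tp, fp, tn, fn):
    # Precision, recall, and accuracy from confusion-matrix counts (Equations 6-8)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

# Hypothetical counts for a 20-image test, for illustration only
print(evaluate(tp=18, fp=1, tn=1, fn=0))   # (0.9473..., 1.0, 0.95)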
RESULTS AND DISCUSSION

Face Detection
The stages of facial segmentation with the proposed method are shown in Figure 8. Figure 8.a is the input image, and Figure 8.b is the result of binary segmentation using Equation 4 without the mask, which shows that the segmentation is not yet perfect. In this study, the mask threshold method of Equation 5 is added to obtain a better result against the background. The method was tested with five real-time (webcam) images and five downloaded images, and it segmented all 10 test images correctly (100%). The result is shown in Figure 8.c.

Figure 8. Face Detection

Application of the algorithm to the masked image segmentation process is shown in Figure 9.

Figure 9. Masked Face Image

The result of masked image segmentation is shown in Figure 9.b. Binary image segmentation using Equation 4 shows that the desired object still contains much interference. To get a better result, the mask thresholder convolution method is applied. The proposed method was tested with five real-time (webcam) images and five downloaded images and segmented all 10 test images correctly. As an example of its performance, image 9.a is the input image, image 9.b is the result of the old method, and image 9.c is the result of the proposed method.

Tests of face detection with the Viola-Jones and mask thresholder methods were conducted on real-time webcam images and offline images (uploaded files). Examples of correct detection are shown in Table 1.

Table 1. The Results of The Masked Face Detection Test (Nos. 1-4: detection result images from the webcam and from uploaded files)

The webcam images comprise a single unmasked-face image and a multiple-face unmasked image, with 5 test images in total. The system was also tested with webcam images for masked face detection on single and multiple-face images. Tests for face detection without masks on real-time images used two single-face images and multiple-face images, totaling five tests; the system detected all five correctly. Examples of this performance are shown in Table 1, Nos. 1 and 2.

Testing with two uploaded images, one single-face image and one multiple-face image, the system detected both correctly. In the uploaded image in Table 1 No. 4, there is a face mask, but it is attached to the neck and does not cover the Haar features (mouth, nose, and eyes) used by the proposed system; the system therefore detects it as an unmasked face.

Tests of masked face detection were carried out with four real-time images, and the proposed system detected all four correctly. An example of the system detecting masked faces is shown in Table 1, No. 3. The system was then tested with six uploaded images; five were correctly detected as masked faces and one test image was not detected correctly. This failure is caused by the inability to segment the facial area.

The model was also tested with a real-time image containing both masked and unmasked faces. On this type of image, the proposed system detected the masked faces correctly, but it did not detect the unmasked face properly because it recognized it as a masked face. The system performance is shown in Table 1 No. 4.

CONCLUSIONS AND SUGGESTIONS

Conclusion
Based on the experimental results, it can be concluded that the mask threshold and Viola-Jones methods can be implemented properly to detect facial images. The mask threshold applied during segmentation reduces disturbances in the segmentation process. The proposed model has an accuracy of 95% for face detection and 94% for masked face detection. The mask threshold method can also be applied to various color image segmentation tasks.

Suggestion
The object in this article is a stationary person. The work can therefore be developed toward masked face detection of moving people in complex open spaces or indoors.

REFERENCES

Abidin, S. (2018). Deteksi Wajah Menggunakan Metode Haar Cascade Classifier Berbasis Webcam Pada Matlab. Jurnal Teknologi Elekterika, 15(1), 21. https://doi.org/10.31963/elekterika.v15i1.2102

Anwar, K., & Setyowibowo, S. (2021). Segmentasi Kerusakan Daun Padi pada Citra Digital. Jurnal Edukasi Dan Penelitian Informatika (JEPIN), 7(1), 39. https://doi.org/10.26418/jp.v7i1.42331

Anwar, K., Yunus, M., & Sujito, S. (2021). Segmentasi Citra Warna Otomatis Rambu Lalu Lintas dengan Penerapan Mask Thresholder. Jurnal Edukasi Dan Penelitian Informatika (JEPIN), 7(3), 481. https://doi.org/10.26418/jp.v7i3.49969

Budiman, B., Lubis, C., & Perdana, N. J. (2021). Pendeteksian Penggunaan Masker Wajah Dengan Metode Convolutional Neural Network. Jurnal Ilmu Komputer Dan Sistem Informasi, 9(1), 40–47. https://doi.org/10.24912/jiksi.v9i1.11556

Fendi, S., Triyanto, A., Hidayati, N., & P, D. E. (2020). Face Detection Dengan Menggunakan Algoritma Viola Jones. STMIK Amikom, January. https://doi.org/10.13140/RG.2.2.35193.21606

Florestiyanto, M. Y., Pratomo, A. H., & Sari, N. I. (2020). Penguatan Ketepatan Pengenalan Wajah Viola-Jones Dengan Pelacakan. Teknika, 9(1), 31–37. https://doi.org/10.34148/teknika.v9i1.241
Hassan, B. A., & Dawood, F. A. A. (2022). Facial image detection based on the Viola-Jones algorithm for gender recognition. International Journal of Nonlinear Analysis and Applications, 6822(November), 1–7. https://doi.org/10.22075/IJNAA.2022.7130

Hui, D. S., I Azhar, E., Madani, T. A., Ntoumi, F., Kock, R., Dar, O., Ippolito, G., Mchugh, T. D., Memish, Z. A., Drosten, C., Zumla, A., & Petersen, E. (2020). The continuing 2019-nCoV epidemic threat of novel coronaviruses to global health – The latest 2019 novel coronavirus outbreak in Wuhan, China. International Journal of Infectious Diseases, 91, 264–266. https://doi.org/10.1016/j.ijid.2020.01.009

Javed Mehedi Shamrat, F. M., Majumder, A., Antu, P. R., Barmon, S. K., Nowrin, I., & Ranjan, R. (2021). Human Face Recognition Applying Haar Cascade Classifier. In Lecture Notes in Networks and Systems, (March), 1–15. https://doi.org/10.1007/978-981-16-5640-8_12

Jones, M. (2004). Robust Real-time Object Detection. International Journal of Computer Vision, 57(2), 137–154. https://doi.org/10.1023/B:VISI.0000013087.49260.fb

Karim Sujon, M. R., Hossain, M. R., Al Amin, M. J., Bepery, C., & Rahman, M. M. (2022). Real-time face mask detection for COVID-19 prevention. 2022 IEEE 12th Annual Computing and Communication Workshop and Conference, CCWC 2022, November, 341–346. https://doi.org/10.1109/CCWC54503.2022.9720764

Kirana, C., & Isnanto, B. (2016). Face Identification For Presence Applications Using Viola-Jones and Eigenface Algorithm. Jurnal Sisfokom (Sistem Informasi Dan Komputer), 5(2), 7–14. https://doi.org/10.32736/sisfokom.v5i2.189

Minu, M. S., Arun, K., Tiwari, A., & Rampuria, P. (2020). Face recognition system based on haar cascade classifier. International Journal of Advanced Science and Technology, 29(5), 3799–3805.

Monica M, K. R. S. (2021). IRJET - Deep Learning Technique to Detect Face Mask in the Covid-19 Pandemic Period. IRJET, 8(8).

Mufid Naufal Baay, Astria Nur Irfansyah, M. A. (2021). Sistem Otomatis Pendeteksi Wajah Bermasker Menggunakan Deep Learning. Jurnal Teknik ITS, 10(1), 64–70.

Nono Heryana, Rini Mayasari, K. A. B. (2020). Penerapan Haar Cascade Classification Model untuk Deteksi Wajah, Hidung, Mulut, dan Mata Menggunakan Algoritma Viola-Jones. Jurnal Ilmu Komputer Dan Teknologi Informasi, 5(1), 21–25.

Öztürk, G., Eldoğan, O., Karayel, D., & Atali, G. (2021). Face Mask Detection on LabVIEW. Artificial Intelligence Theory and Applications, 1(2), 9–18. https://dergipark.org.tr/en/pub/aita/issue/70791/1137977

Padilla, R., Ferreira, C., & Filho, F. C. (2012). Evaluation of Haar Cascade Classifiers for Face Detection. International Journal of Computer, Electrical, Automation, Control and Information Engineering, 6(4), 466–469.

Palli, G. H., Shah, A. A., Chowdhry, B. S., Hussain, T., Ur Rehman, U., & Mirza, G. F. (2021). Recognition of Train Driver's Attention Using Haar Cascade. HONET 2021 - IEEE 18th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI, December, 132–136. https://doi.org/10.1109/HONET53078.2021.9615452

Poorvi Taunk, G. Vani Jayasri, Padma Priya J, N. S. K. (2020). Face Detection using Viola Jones with Haar Cascade. Test Engineering and Management, 83(11), 131–134. https://doi.org/10.46501/ijmtst061124
Pratama, M. A. H. P. (2022). Alat Pendeteksi Wajah Mahasiswa Universitas Trunojoyo Madura (UTM) Menggunakan Metode Viola-Jones. ALINIER: Journal of Artificial Intelligence & Applications, 2(2), 50–60. https://doi.org/10.36040/alinier.v2i2.4290

Putri, R. E., Matulatan, T., & Hayaty, N. (2019). Sistem Deteksi Wajah Pada Kamera Realtime dengan menggunakan Metode Viola Jones. Jurnal Sustainable: Jurnal Hasil Penelitian Dan Industri Terapan, 8(1), 30–37. https://doi.org/10.31629/sustainable.v8i1.526

Sakshi, S., Gupta, A. K., Singh Yadav, S., & Kumar, U. (2021). Face Mask Detection System using CNN. 2021 International Conference on Advance Computing and Innovative Technologies in Engineering, ICACITE 2021, March, 212–216. https://doi.org/10.1109/ICACITE51222.2021.9404731

Suharso, A. (2017). Pengenalan Wajah Menggunakan Metode Viola-Jones dan Eigenface Dengan Variasi Posisi Wajah Berbasis Webcam. Techno Xplore: Jurnal Ilmu Komputer Dan Teknologi Informasi, 1(2). https://doi.org/10.36805/technoxplore.v1i2.107

Suhery, C., & Ruslianto, I. (2017). Identifikasi Wajah Manusia untuk Sistem Monitoring Kehadiran Perkuliahan menggunakan Ekstraksi Fitur Principal Component Analysis (PCA). Jurnal Edukasi Dan Penelitian Informatika (JEPIN), 3(1), 9. https://doi.org/10.26418/jp.v3i1.19792

Utaminingrum, F., Primaswara, R., & Arum Sari, Y. (2017). Image processing for rapidly eye detection based on robust haar sliding window. International Journal of Electrical and Computer Engineering, 7(2), 823–830. https://doi.org/10.11591/ijece.v7i2.pp823-830

VAIBHAV, H. (2020). Face Identification using Haar cascade classifier. Medium, 8(5), 1–5. https://medium.com/geeky-bawa/face-identification-using-haar-cascade-classifier-af3468a44814

Vu, H. N., Nguyen, M. H., & Pham, C. (2022). Masked face recognition with convolutional neural networks and local binary patterns. Applied Intelligence, 52(5), 5497–5512. https://doi.org/10.1007/s10489-021-02728-1

Wihandika, R. (2021). Deteksi Masker Wajah Menggunakan Metode Adjacent Evaluation Local Binary Patterns. Jurnal RESTI (Rekayasa Sistem Dan Teknologi Informasi), 5(4), 705–712. https://doi.org/10.29207/resti.v5i4.3094

Yadav, S. (2020). Deep Learning based Safe Social Distancing and Face Mask Detection in Public Areas for COVID-19 Safety Guidelines Adherence. International Journal for Research in Applied Science and Engineering Technology, 8(7), 1368–1375. https://doi.org/10.22214/ijraset.2020.30560