Knowledge Engineering and Data Science (KEDS), pISSN 2597-4602, eISSN 2597-4637, Vol 4, No 2, December 2021, pp. 97–104, https://doi.org/10.17977/um018v4i22021p97-104
©2021 Knowledge Engineering and Data Science | W : http://journal2.um.ac.id/index.php/keds | E : keds.journal@um.ac.id
This is an open access article under the CC BY-SA license (https://creativecommons.org/licenses/by-sa/4.0/). KEDS is a Sinta 2 journal (https://sinta.kemdikbud.go.id/journals/detail?id=6662) accredited by the Indonesian Ministry of Education, Culture, Research, and Technology.

Melanoma Classification based on Simulated Annealing Optimization Neural Network

Edi Jaya Kusuma a,1,*, Ika Pantiawati a,2, Sri Handayani b,3
a Department of Medical Record and Health Information, Faculty of Health Science, Universitas Dian Nuswantoro, Semarang 50131, Indonesia
b Department of Public Health, Faculty of Health Science, Universitas Dian Nuswantoro, Semarang 50131, Indonesia
1 edi.jaya.kusuma@dsn.dinus.ac.id*; 2 ikapantia13@dsn.dinus.ac.id; 3 sri.handayani@dsn.dinus.ac.id
* corresponding author

I. Introduction

Cancer arises from the uncontrollable growth of abnormal cells in the human body. It can occur in many parts of the body, depending in part on a person's lifestyle. The Global Cancer Observatory (Globocan) reported 18.1 million cancer cases worldwide in 2018, with more than 9.1 million categorized as mortality cases [1]. In 2020, the same source reported that total cancer cases had increased to 19.3 million, of which about 10 million were classified as death cases [2]. These reports indicate that cancer cases are increasing and spreading globally every year. One cancer that commonly arises in countries with a high UV (ultraviolet) index is skin cancer [2]. Skin cancer is often caused by high direct exposure of the skin to ultraviolet light such as sunlight.
Based on their invasion time and the damage they cause to the body, skin cancers can be divided into two types: melanoma and nonmelanoma. Melanoma is categorized as a malignant skin cancer that can threaten human life [3]. The invasion state can be identified from the appearance of pigment cells called melanocytes in the form of dark skin lesions. However, the lesion color can differ between cases depending on the number of changed pigment cells [4]. Unfortunately, benign skin cancer has an appearance similar to malignant skin cancer. Thus, it is crucial to identify early whether a skin lesion is melanoma or benign; misclassification of skin cancer can lead to severe clinical outcomes.

ARTICLE INFO

Article history: Submitted 15 December 2021; Revised 22 December 2021; Accepted 28 December 2021; Published online 31 December 2021

Keywords: Cancer; Melanoma; Neural Network; Optimization; Simulated Annealing

ABSTRACT

Technology development in image processing and artificial intelligence leads to a high demand for smart systems, especially in the health sector. Cancer is one of the diseases with the highest mortality worldwide. Melanoma is a cancer commonly caused by high exposure to UV light. The earlier the melanoma is identified, the higher the patient's chance of recovering. Therefore, this study proposes melanoma detection based on a BPNN optimized by a simulated annealing algorithm. This research utilizes PH2 dermoscopic image data containing 200 color digital images in BMP format. The data are processed using color feature extraction techniques to identify the characteristics of each image according to the target data. The color space extraction includes mean RGB, HSV, CIE LAB, YCbCr, and XYZ. The evaluation result showed that BPNN-SA increased the accuracy of skin cancer classification compared to the original BPNN, with an overall average accuracy of 84.03%. This is an open access article under the CC BY-SA license (https://creativecommons.org/licenses/by-sa/4.0/).

Many researchers have conducted studies based on advances in image processing and artificial intelligence to identify cancer at an early stage. The earlier the cancer is identified, the higher the patient's chance of recovery. Gautam and Raman [5] proposed melanoma classification based on the local binary pattern (LBP) and its variants. The evaluation was performed with several machine learning methods: decision tree (DT), K-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF). The results show that the RF method achieved the best performance with 80.3% accuracy. Other research, by Zghal and Derbel [6], utilized the PH2 dermoscopic image dataset to develop a Computer-Aided Diagnosis (CAD) system; the method used the Asymmetry, Border, Color, and Diameter (ABCD) rule for feature extraction. Another study on skin cancer classification with the PH2 dataset proposed a combination of color and texture features [7]. The features consist of five color spaces (RGB, HSV, LAB, XYZ, and YCbCr) and were used as input to three different classifiers: K-nearest neighbor, support vector machine, and neural network. Furthermore, some studies apply optimization methods to improve recognition performance.
Using the same data collection, other work proposed skin cancer segmentation based on Fuzzy C-Means clustering and skin cancer detection using an ANN integrated with the Differential Evolution (DE) algorithm as the training optimization method [8]. That method used multi-feature extraction such as Red Green Blue (RGB), Local Binary Pattern (LBP), and Gray Level Co-occurrence Matrix (GLCM) features. Its evaluation achieved 97.4% accuracy, indicating that optimizing an ANN with the DE algorithm can detect skin cancer effectively.

This study proposes skin cancer detection based on color feature extraction and a neural network optimized with the simulated annealing (SA) algorithm. SA can find a global solution using a randomized approach. Moreover, an SA-optimized adaptive neuro-fuzzy inference system (ANFIS) outperformed other optimization methods such as hyper-box (HB), backpropagation (BP), and genetic algorithm (GA), with 96.28% accuracy [9]. The proposed color feature extraction consists of several color spaces: RGB, HSV, CIE Lab, YCbCr, and XYZ. The proposed classifier implements a backpropagation neural network (BPNN) in which the weight of each synapse is optimized using simulated annealing (SA). All of the proposed methods are evaluated on the PH2 dataset.

This study is presented in four sections. The second section describes the proposed methodology and theoretical foundation, the third section explains the results of the proposed method, and the last section concludes the work.

II. Methods

This study proposes skin cancer detection based on color feature extraction and a neural network optimized with the simulated annealing (SA) algorithm, BPNN-SA. This research utilizes the PH2 Dermoscopic Image Dataset provided by the Dermatology Service of Hospital Pedro Hispano in Matosinhos, Portugal, together with the Universidade do Porto and Técnico Lisboa [10].
This dataset contains 200 images of skin lesions, separated into 160 benign images (common nevi and atypical nevi) and 40 melanoma images. The images are saved in BMP format at 768×560 pixels. Figure 1 shows sample images from the PH2 dataset. Figure 2 shows the research design of the proposed method, BPNN-SA. As shown in Figure 2, the first step is to extract each component of the color spaces. Several color spaces, namely RGB, HSV, CIE Lab, YCbCr, and XYZ, are used in this study.

Fig. 1. Sample images of the PH2 dataset: (a) common nevi; (b) atypical nevi; and (c) melanoma

RGB is a color space used in many digital devices such as smartphones, cameras, televisions, and computers [11]. RGB consists of three layers, each representing red, green, or blue. This color space uses an 8-bit system for color determination, so pixel values in each layer range from 0 to 255; the lower the value, the darker the color. Besides serving as the color representation in digital devices, RGB is often used as a color feature in studies such as fruit classification [12], image segmentation [13], and object detection [14].

HSV stands for hue, saturation, and value. Unlike RGB, in which each layer represents a color, each term in HSV has a specific function in image representation: hue determines the color tone of the image, saturation represents the color's dominance, and value represents the brightness level. HSV is also often used as a color feature [15][16].

YCbCr is a color space composed of three components: Y, Cb, and Cr. Each component has a different function: Y is the luma, or color brightness, Cb is the blue-difference chroma component, and Cr is the red-difference chroma component.
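As a sketch of the mean-color feature extraction used in this study (five color spaces × three channels = fifteen features per image), the following NumPy-only example may help. The function names are illustrative, not from the paper; the conversion formulas follow common definitions, and sRGB gamma correction is omitted for brevity.

```python
import numpy as np

D65 = (0.95047, 1.0, 1.08883)  # reference white for CIE Lab

def rgb_to_hsv(rgb):
    # rgb in [0, 1], shape (..., 3); h, s, v all scaled to [0, 1]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc, minc = rgb.max(-1), rgb.min(-1)
    delta = maxc - minc
    safe = np.where(delta == 0, 1.0, delta)
    h = np.select([maxc == r, maxc == g, maxc == b],
                  [((g - b) / safe) % 6, (b - r) / safe + 2, (r - g) / safe + 4]) / 6.0
    h = np.where(delta == 0, 0.0, h)
    s = np.where(maxc == 0, 0.0, delta / np.where(maxc == 0, 1.0, maxc))
    return np.stack([h, s, maxc], axis=-1)

def rgb_to_xyz(rgb):
    # linear RGB -> CIE XYZ (sRGB primaries, gamma ignored for brevity)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    return rgb @ m.T

def xyz_to_lab(xyz):
    t = xyz / np.array(D65)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 conversion to the 8-bit YCbCr range
    m = np.array([[65.481, 128.553, 24.966],
                  [-37.797, -74.203, 112.0],
                  [112.0, -93.786, -18.214]])
    return rgb @ m.T + np.array([16.0, 128.0, 128.0])

def extract_color_features(image_uint8):
    """Mean of each channel in RGB, HSV, Lab, YCbCr, and XYZ (15 features)."""
    rgb = image_uint8.astype(np.float64) / 255.0
    xyz = rgb_to_xyz(rgb)
    spaces = [rgb, rgb_to_hsv(rgb), xyz_to_lab(xyz), rgb_to_ycbcr(rgb), xyz]
    return np.array([s[..., c].mean() for s in spaces for c in range(3)])

# stand-in for one 768x560 PH2 image
img = np.random.randint(0, 256, (560, 768, 3), dtype=np.uint8)
features = extract_color_features(img)
print(features.shape)  # (15,)
```

In practice a library such as scikit-image or OpenCV would provide these conversions; the point here is only the shape of the feature vector that feeds the classifier.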
Commonly, this color space is used in digital video processing [17]. The Commission Internationale de l'Éclairage (CIE) proposed the XYZ and Lab color spaces. XYZ was proposed in the 1920s and is still used as a graphics standard today [18]. The Lab color space was proposed by the CIE in 1976 and was designed to represent color as closely as possible to human vision [19]. Lab values range from negative to positive, with different values representing different colors.

Afterward, the extracted features were divided using cross-fold validation to generate training data and testing data. Cross-validation is used to evaluate the capability and robustness of the proposed model on unseen data. The training data were used as a reference to train the Backpropagation Neural Network (BPNN).

Fig. 2. Research design

Backpropagation Neural Network (BPNN) is a multi-layer perceptron-based artificial neural network. BPNN has a structure similar to the MLP, with an input, hidden, and output layer, as shown in Figure 3. The difference lies in the learning process, in which BPNN propagates both forward and backward [21]. BPNN is also known as backward propagation of errors, as it uses a gradient function to identify the class of the data [22]. The result of BPNN training is a trained network model, from which the weight of each synapse is extracted. The extracted weights are then optimized using the simulated annealing (SA) algorithm. SA is a metaheuristic search method that solves a problem iteratively based on a specific objective function. The algorithm was inspired by the annealing process in the metallurgy industry [23].
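The cross-fold split and BPNN training stage described above can be sketched as follows: a minimal one-hidden-layer backpropagation network in NumPy (tanh hidden units, sigmoid output, MSE loss). The class, hyperparameters, and toy data here are illustrative stand-ins, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyBPNN:
    """One-hidden-layer backpropagation network (tanh hidden, sigmoid output, MSE loss)."""
    def __init__(self, n_in, n_hidden=5, lr=1.0):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)                    # hidden activation
        self.o = 1 / (1 + np.exp(-(self.h @ self.W2 + self.b2)))   # sigmoid output
        return self.o

    def train(self, X, y, epochs=500):
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            o = self.forward(X)
            # backpropagate the MSE error through the sigmoid and tanh layers
            d_o = (o - y) * o * (1 - o)
            d_h = (d_o @ self.W2.T) * (1 - self.h ** 2)
            self.W2 -= self.lr * self.h.T @ d_o / len(X)
            self.b2 -= self.lr * d_o.mean(0)
            self.W1 -= self.lr * X.T @ d_h / len(X)
            self.b1 -= self.lr * d_h.mean(0)

def kfold_indices(n, k, seed=0):
    """Shuffled k-fold (train_idx, test_idx) pairs, as in the paper's evaluation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i]) for i in range(k)]

# toy separable data standing in for the 15 color features of 200 images
X = rng.normal(0, 1, (200, 15))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

accs = []
for tr, te in kfold_indices(200, 5):
    net = TinyBPNN(15)
    net.train(X[tr], y[tr])
    accs.append(float(((net.forward(X[te])[:, 0] > 0.5) == y[te]).mean()))
print("mean CV accuracy:", np.mean(accs))
```

The paper's actual configuration (one hidden layer, 5 nodes, 100 epochs, learning rate 0.001, tangent sigmoid activation, MSE) is listed in Table 1; the values above are tuned only so this toy example trains quickly.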
SA imitates the crystallization process of liquid metal: the process begins by heating the metal until it reaches the desired temperature, then cooling it gradually and in a controlled manner until the metal settles into its optimal form [24]. The extracted weights are treated as the initial input to simulated annealing. The search process generates random perturbations that move the "particle" as the initial temperature cools down. A move is accepted if the new "particle" position has a lower energy state; a move to a higher energy state may still be accepted with a probability that decreases as the temperature drops, which helps the search escape local minima. The search lasts until the defined number of iterations is exceeded or the error-rate boundary is satisfied. The final output is an optimized set of weights, which is assigned to the previous network model to replace its original weights. Finally, the final model was evaluated using the test data to determine the performance of the proposed model.

III. Experiment Result and Discussion

Skin lesion identification based on BPNN with simulated annealing optimization has been conducted. The PH2 dataset was used as the reference in model development. The classification features were extracted from the PH2 images using the RGB, HSV, YCbCr, XYZ, and Lab color spaces; the extraction produces fifteen features for each skin lesion image. From these features, cross-validation was applied to generate the training and testing sets, with the number of folds ranging from 2 to 10. Cross-validation evaluates the model's capability to handle unseen data. Table 1 shows the initial configuration of the BPNN.

Fig. 3. BPNN structure with one hidden layer [20]

After the BPNN was configured, the training set obtained from cross-validation was fed into the neural network model for training. The neural network produced by the training process was then inspected to extract the weight of each synapse.
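The annealing search described above might be sketched as follows. The fitness function here is an illustrative stand-in (a small least-squares problem); in the paper, the fitness is the network's MSE over the extracted weight vector, and the temperature and iteration settings mirror Table 2.

```python
import math
import random

random.seed(42)

def simulated_annealing(fitness, w0, t0=100.0, max_iter=50, step=0.5, cooling=0.9):
    """Minimize `fitness` over a weight vector (initial temperature 100,
    50 iterations, as in the paper's SA specification)."""
    w, e = list(w0), fitness(w0)
    best_w, best_e = list(w), e
    t = t0
    for _ in range(max_iter):
        # random perturbation moves the "particle"
        cand = [wi + random.uniform(-step, step) for wi in w]
        e_cand = fitness(cand)
        # accept downhill moves always; uphill moves with Metropolis probability
        if e_cand < e or random.random() < math.exp(-(e_cand - e) / t):
            w, e = cand, e_cand
            if e < best_e:
                best_w, best_e = list(w), e
        t *= cooling  # gradual, controlled cooling
    return best_w, best_e

# stand-in fitness: MSE of a 1-D linear fit (the paper uses the BPNN's MSE)
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]
mse = lambda w: sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

w_opt, e_opt = simulated_annealing(mse, [0.0, 0.0])
print("MSE before:", mse([0.0, 0.0]), "after:", e_opt)
```

The optimized vector returned by the search is what gets written back into the trained network in place of its original weights.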
These weight sets were used as the initial state of the optimization model based on the simulated annealing (SA) algorithm; Table 2 shows its specification. The optimization aims to obtain optimized weights for the BPNN model. After the set of optimized weights was found, it was assigned to the BPNN to replace the original weights. The evaluation was conducted by feeding the testing set into the optimized model. The proposed model was compared with the original BPNN to show the difference created by the optimization algorithm.

Table 3 presents the accuracy of the original BPNN for each fold in cross-validation. The ninth fold achieved the highest accuracy, 83.83%, while the tenth fold achieved the lowest, 68%. Overall, the original BPNN identified skin cancer with a total average accuracy of 79.51%. The evaluation result of the proposed optimized model can be seen in Table 4. All fold numbers outperformed the original BPNN. In BPNN-SA, the sixth fold achieved the highest accuracy, 88.38%, while the lowest was the third fold with 81.81%. In addition, BPNN-SA reached an overall average accuracy of 84.03%. This can be attributed to the capability of simulated annealing to search for the global minimum, which is the minimum value of the fitness function. Moreover, simulated annealing uses a randomized approach to generate the global solution, which theoretically has a broader probability of finding the best solution [9].

Table 1. BPNN configuration
Parameter               Setting or specification
Hidden layer            1
Nodes                   5
Epochs                  100
Learning rate           0.001
Activation function     Tangent Sigmoid
Performance function    Mean Square Error (MSE)

Table 2.
The specification of the simulated annealing
Parameter               Setting or specification
Maximum iterations      50
Initial temperature     100
Fitness function        Mean Square Error (MSE)

Table 3. The accuracy of the original BPNN (%)
No        K=2     K=3     K=4     K=5     K=6     K=7     K=8     K=9     K=10
1         80.00   84.84   88.00   87.50   78.78   85.71   80.00   86.36   80.00
2         76.00   77.27   82.00   80.00   63.63   89.28   72.00   81.81   90.00
3                 81.81   74.00   65.00   84.84   89.28   92.00   77.27   75.00
4                         78.00   85.00   90.90   78.57   96.00   77.27   75.00
5                                 87.50   90.90   82.14   80.00   86.36   20.00
6                                         75.75   64.28   68.00   77.27   15.00
7                                                 78.57   80.00   77.27   80.00
8                                                         80.00   95.45   90.00
9                                                                 95.45   75.00
10                                                                        80.00
Average   78.00   81.31   80.50   81.00   80.80   81.12   81.00   83.83   68.00

The comparison between the original BPNN and BPNN-SA can be seen in Figure 4. In almost every fold, BPNN-SA outperforms the original BPNN, especially in the sixth and tenth folds, whose accuracies differ significantly from the original BPNN results. This happens because BPNN-SA uses simulated annealing as its search function, which relies on a randomized approach with a broader chance of finding the best solution, whereas the original BPNN relies on a gradient-based search that tends to become trapped in local minima [25]. This result indicates that the simulated annealing algorithm is capable of improving the performance of BPNN in classifying skin cancer.

IV. Conclusion

The proposed improved skin cancer classification using the BPNN and simulated annealing methods has been carried out. This research utilizes PH2 dermoscopic image data containing 200 color digital images in BMP format. The data are processed using color feature extraction techniques to identify the characteristics of each image according to the target data. The color space extraction includes mean RGB, HSV, CIE LAB, YCbCr, and XYZ.
The experiment was conducted using the cross-fold validation method to evaluate the model's robustness toward unknown data. The BPNN model was first trained using the training set; the trained weights were then used as the initial weights for simulated annealing, which searched for the optimal weights for the BPNN model. The evaluation result showed that the BPNN-SA method increased the accuracy of skin cancer classification compared to the original BPNN method, with an overall average accuracy of 84.03%.

Table 4. The accuracy of the BPNN-SA (%)
No        K=2     K=3     K=4     K=5     K=6     K=7     K=8     K=9     K=10
1         83.00   80.30   88.00   87.50   84.84   85.71   84.00   90.90   80.00
2         85.00   80.30   84.00   75.00   69.69   89.28   88.00   90.90   95.00
3                 84.84   78.00   70.00   87.87   85.71   88.00   77.27   75.00
4                         78.00   90.00   93.93   85.71   92.00   77.27   80.00
5                                 90.00   100.00  82.14   88.00   90.90   70.00
6                                         93.93   82.14   72.00   77.27   90.00
7                                                 78.57   88.00   81.81   85.00
8                                                         80.00   90.90   95.00
9                                                                 81.81   85.00
10                                                                        85.00
Average   84.00   81.81   82.00   82.50   88.38   84.18   85.00   84.34   84.00

Fig. 4. Accuracy comparison of BPNN and BPNN-SA

Acknowledgment

We gratefully thank Lembaga Penelitian dan Pengabdian kepada Masyarakat (LPPM) (Grant ID: 056/A38-04/UDN-09/VI/2021) and the Faculty of Health Science of Universitas Dian Nuswantoro for supporting this research.

Declarations

Author contribution
All authors contributed equally as the main contributors of this paper. All authors read and approved the final paper.

Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflict of interest
The authors declare no known conflicts of financial interest or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Reprints and permission information are available at http://journal2.um.ac.id/index.php/keds.
Publisher's Note: Department of Electrical Engineering - Universitas Negeri Malang remains neutral with regard to jurisdictional claims and institutional affiliations.

References
[1] Kementerian Kesehatan Republik Indonesia, "Kementerian Kesehatan Republik Indonesia," Kementerian Kesehatan RI, p. 1, 2019.
[2] H. Sung et al., "Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries," CA Cancer J. Clin., vol. 71, no. 3, pp. 209–249, May 2021.
[3] R. B. Oliveira, J. P. Papa, A. S. Pereira, and J. M. R. S. Tavares, "Computational methods for pigmented skin lesion classification in images: review and future trends," Neural Comput. Appl., vol. 29, no. 3, pp. 613–636, Feb. 2018.
[4] N. K. Mishra and M. E. Celebi, "An Overview of Melanoma Detection in Dermoscopy Images Using Image Processing and Machine Learning," pp. 1–15, Jan. 2016.
[5] A. Gautam and B. Raman, "Skin Cancer Classification from Dermoscopic Images using Feature Extraction Methods," in 2020 IEEE Region 10 Conference (TENCON), Nov. 2020, pp. 958–963.
[6] N. S. Zghal and N. Derbel, "Melanoma Skin Cancer Detection based on Image Processing," Curr. Med. Imaging, vol. 16, no. 1, pp. 50–58, 2018.
[7] S. Oukil, R. Kasmi, K. Mokrani, and B. García-Zapirain, "Automatic segmentation and melanoma detection based on color and texture features in dermoscopic images," Skin Res. Technol., Nov. 2021.
[8] M. Kumar, M. Alshehri, R. AlGhamdi, P. Sharma, and V. Deep, "A DE-ANN Inspired Skin Cancer Detection Approach Using Fuzzy C-Means Clustering," Mob. Networks Appl., vol. 25, no. 4, pp. 1319–1329, 2020.
[9] B. Haznedar, M. T. Arslan, and A. Kalinli, "Optimizing ANFIS using simulated annealing algorithm for classification of microarray gene expression cancer data," Med. Biol. Eng. Comput., vol. 59, no. 3, pp.
497–509, Mar. 2021.
[10] E. M. Senan and M. E. Jadhav, "Analysis of dermoscopy images by using ABCD rule for early detection of skin cancer," Glob. Transitions Proc., vol. 2, no. 1, pp. 1–7, 2021.
[11] R. A. Asmara, F. Rahutomo, Q. Hasanah, and C. Rahmad, "Chicken meat freshness identification using the histogram color feature," in Proc. 2017 Int. Conf. Sustain. Inf. Eng. Technol. (SIET), 2018, pp. 57–61.
[12] S. Tu, Y. Xue, C. Zheng, Y. Qi, H. Wan, and L. Mao, "Detection of passion fruits and maturity classification using Red-Green-Blue Depth images," Biosyst. Eng., vol. 175, no. 4, pp. 156–167, Nov. 2018.
[13] G. F. Shidik, F. N. Adnan, C. Supriyanto, R. A. Pramunendar, and P. N. Andono, "Multi color feature, background subtraction and time frame selection for fire detection," in Proc. 2013 Int. Conf. Robot. Biomimetics, Intell. Comput. Syst. (ROBIONETICS), Nov. 2013, pp. 115–120.
[14] M. K. Alsmadi, "Content-Based Image Retrieval Using Color, Shape and Texture Descriptors and Features," Arab. J. Sci. Eng., vol. 45, no. 4, pp. 3317–3330, 2020.
[15] O. R. Indriani, E. J. Kusuma, C. A. Sari, E. H. Rachmawanto, and D. R. I. M. Setiadi, "Tomatoes classification using K-NN based on GLCM and HSV color space," in 2017 International Conference on Innovative and Creative Information Technology (ICITech), Nov. 2017, pp. 1–6.
[16] D. Wu, C. Zhang, L. Ji, R. Ran, H. Wu, and Y. Xu, "Forest Fire Recognition Based on Feature Extraction from Multi-View Images," Trait. du Signal, vol. 38, no. 3, pp. 775–783, Jun. 2021.
[17] D. Chai and A. Bouzerdoum, "A Bayesian approach to skin color classification in YCbCr color space," in 2000 TENCON Proceedings. Intelligent Systems and Technologies for the New Millennium (Cat. No.00CH37119), 2000, vol. 2, pp. 421–424.
[18] J. Schanda, Colorimetry: Understanding the CIE System. John Wiley & Sons, 2007.
[19] E. E. Lavindi, E. J. Kusuma, G. F.
Shidik, R. A. Pramunendar, A. Z. Fanani, and Pujiono, "Neural network based on GLCM, and CIE L∗a∗b∗ Color Space to Classify Tomatoes Maturity," in Proc. 2019 Int. Semin. Appl. Technol. Inf. Commun. (iSemantic), 2019, pp. 45–50.
[20] E. Kusuma, G. Shidik, and R. Pramunendar, "Optimization of Neural Network using Nelder Mead in Breast Cancer Classification," Int. J. Intell. Eng. Syst., vol. 13, no. 6, pp. 330–337, Dec. 2020.
[21] T. S and M. N, "Detection, Segmentation and Recognition of Face and its Features Using Neural Network," J. Biosens. Bioelectron., vol. 7, no. 2, 2016.
[22] S. Tikoo and N. Malik, "Detection of Face using Viola Jones and Recognition using Back Propagation Neural Network," Int. J. Comput. Sci. Mob. Comput., vol. 5, no. 5, pp. 288–295, 2016.
[23] M. Riza, P. D. Sentia, A. Andriansyah, and A. Muslim, "Simulated Annealing-Based Optimization of Biodegradable Plastic Synthesis," Int. Rev. Model. Simulations, vol. 12, no. 1, p. 24, Feb. 2019.
[24] G. F. Shidik, E. J. Kusuma, S. Nuraisha, and P. N. Andono, "Heuristic vs Metaheuristic method: Improvement of spoofed fingerprint identification in IoT devices," Int. Rev. Model. Simulations, vol. 12, no. 3, pp. 168–175, 2019.
[25] S. S. Behera and S. Chattopadhyay, "A comparative study of back propagation and simulated annealing algorithms for neural net classifier optimization," Procedia Eng., vol. 38, pp. 448–455, 2012.