Journal of Applied Engineering and Technological Science Vol 4(2) 2023: 664-673

COMPARISON ANALYSIS OF BRAIN IMAGE CLASSIFICATION BASED ON THRESHOLDING SEGMENTATION WITH CONVOLUTIONAL NEURAL NETWORK

Alwas Muis1*, Sunardi2, Anton Yudhana3
Master Program of Informatics, Universitas Ahmad Dahlan, Yogyakarta, Indonesia1
Department of Electrical Engineering, Universitas Ahmad Dahlan, Yogyakarta, Indonesia2,3
2207048007@webmail.uad.ac.id1*, sunardi@mti.uad.ac.id2, eyudhana@ee.uad.ac.id3
Received: 30 January 2023, Revised: 23 March 2023, Accepted: 26 March 2023
*Corresponding Author

ABSTRACT
Brain tumors are among the most fatal diseases and can afflict anyone regardless of gender or age, necessitating early detection of symptoms and prompt, accurate treatment. Brain tumors can be identified using Magnetic Resonance Imaging (MRI), which detects abnormal tissue or cell development in and around the brain. Biopsy is another option, but its results take approximately 10 to 15 days after the examination, so technology is required to classify the images. The goal of this study is to compare the highest accuracy attained when classifying with thresholding segmentation versus without it in the CNN method. Images are assigned threshold values of 150, 100, and 50. The dataset consists of 7023 MRI scans covering four classes of brain tumor: glioma, notumor, meningioma, and pituitary. Without thresholding segmentation, classification yielded the highest accuracy, 92%. With segmentation, the highest score, 88%, was obtained at a threshold of 100. This demonstrates that thresholding segmentation during CNN preprocessing is less effective for brain image classification.
Keywords: Convolutional Neural Network, Machine Learning, Magnetic Resonance Imaging, Classification, Brain Tumors

1.
Introduction
Brain tumors are abnormal, uncontrolled growths of cells in parts of the brain that can result in cancer symptoms (Badr et al., 2020). There are two types: primary and secondary. Primary brain tumors are abnormal, uncontrolled cell changes originating from brain cells, whereas secondary brain tumors spread to the brain from cancer in other parts of the body. Brain tumors can affect anyone regardless of gender and age, including very young children. Survival rates vary according to the type of tumor and the patient's age (Das et al., 2019; Takahashi et al., 2019). Brain tumors can hugely impact a patient's quality of life because they have lasting, life-altering physical and psychological effects (Arabahmadi & Farahbakhsh, 2022). They are among the leading causes of death (Al-Hadidi et al., 2020; Chahal et al., 2020).
People with brain tumors can be diagnosed using scanner technology to see tissue or cell growth around the brain. Technologies that doctors often utilize are Magnetic Resonance Imaging (MRI) (Seetha & Raja, 2018) and CT scans (Shi et al., 2021). The first technology doctors use in detecting brain tumors is MRI (Pashaei et al., 2018). MRI can show details that cannot be seen with other scanning methods to diagnose abnormal tissue growth in the brain (Ismael & Abdel-Qader, 2018). Other methods doctors often use in decision-making are biopsy and direct manual observation. A biopsy is the taking of body tissue or organs for examination; samples taken from patients are studied in the laboratory. Laboratory analysis takes a long time, about 10 to 15 days, to yield a diagnosis, while manual diagnosis carries a high risk of error.
Therefore, alternative methods that are fast, have a low error rate, and are well documented are needed so that doctors can make informed and accurate decisions (Xie et al., 2022). This requires automatically segmenting images with computer assistance to shorten the time it takes to diagnose brain tumor disease. Computers can solve these problems by learning from training data in detail, so that they can recognize the same data patterns accurately. This process is known as Machine Learning (ML). ML is part of Artificial Intelligence (AI) and is currently used to solve various problems (D, 2019), including medical imaging (Shi et al., 2021). Machine learning models are developed to learn by themselves from the training data provided so that they can detect or classify specific data. One branch of ML that can do this is deep learning, which has grown rapidly in recent years. One deep learning algorithm with advantages in classifying images is the Convolutional Neural Network (CNN) (Li et al., 2023). The CNN improves on traditional artificial neural networks by introducing a sliding-window kernel that can efficiently extract features without using too many parameters (Hadinata et al., 2023). The CNN was first created by Fukushima in Tokyo, Japan, although it was then known as the "Neocognitron". It was later redeveloped with gradient-based learning for handwriting recognition by LeCun, Bottou, Bengio, and Haffner. The ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) was then won in 2012 by a system that used a CNN.
As a result of this accomplishment, Alex Krizhevsky demonstrated that the CNN method can defeat other machine learning approaches for image object detection. Previous research on the classification of brain tumors was conducted by Moh'd Rasoul Al-Hadidi, Bayan Alsaaidah, and Mohammed Y. Al-Gawagzeh under the title "Glioblastomas brain tumour segmentation based on convolutional neural networks". That study classified glioblastoma-type brain tumors using the Convolutional Neural Network (CNN) method, one of the methods found in machine learning, and successfully detected and classified glioblastoma-type brain tumors with an accuracy of 75%. A problem faced when using the CNN method in ML is that segmentation accuracy is influenced by kernel size: the smaller the kernel, the better the accuracy, but at the cost of relatively long processing times (Al-Hadidi et al., 2020). Based on the description above, research on segmentation of brain tumors is very important for diagnosis, treatment planning, and evaluation of treatment outcomes (Zhao et al., 2018). We therefore classified four classes of brain MRI images, namely glioma, pituitary, meningioma, and no tumor, using the CNN method with thresholding segmentation.

2. Literature Review
Several researchers have conducted related studies by testing the CNN method for detection and classification, with and without segmentation. Research conducted under the title "Batik classification using convolutional neural network with data improvements" used batik motifs as objects. Its dataset comprised an old dataset from a public repository and a new dataset that updated the old one by replacing anomalous data. The research used the CNN method with the ResNet-18 architecture.
The best test results were 88.88% on the new dataset and 66.14% on the old dataset; the data improvement had no significant effect on accuracy (Meranggi et al., 2022). Research under the title "Tifinagh handwritten character recognition using optimized convolutional neural network" used the CNN method to recognize handwriting and achieved an excellent accuracy score of 99.27% (Niharmine et al., 2022). Research under the title "IoT enabled depthwise separable convolution neural network with deep support vector machine for COVID-19 diagnosis and classification" identified and classified COVID-19 disease based on X-ray or computed tomography (CT) images, obtaining an accuracy of 98.54% (Le et al., 2021). Research under the title "Six skin diseases classification using deep convolutional neural network" classified skin diseases with the CNN method; its dataset of 3000 color images, taken from the internet, was divided into 90% training data and 10% testing data, and the study achieved an accuracy of 81.75% (Saifan & Jubair, 2022).
The research mentioned above reports model testing of CNN's ability to categorize scans and other sorts of images captured with regular cameras, and its findings demonstrate CNN's proficiency in categorizing pictures. This study proceeds as in several previous studies, namely classifying a brain tumor image dataset using a CNN architecture, and tests the CNN model against the classification results of brain tumor images. However, this study adds a segmentation technique before images enter feature extraction, to assess whether the classification model with segmentation is better than the classification method without segmentation.
The segmentation used in this study is thresholding segmentation.

3. Research Methods
The methodology of this study, shown in Fig. 1, consists of the stages of data collection, preparation, splitting, system design, and system testing.
Fig. 1. Research method
1) Datasets. The accuracy a study obtains is influenced by the dataset used; each study uses a different dataset and therefore gets different accuracy values (Meranggi et al., 2022). The dataset in this study uses MRI images (Angeli et al., 2018) downloaded from the Kaggle website.
2) Pre-processing. Pre-processing converts images to a common form, for example converting RGB images to grayscale or resizing them. The dataset obtained has three channels (RGB) at high resolution, so in this study it was converted to grayscale to simplify the image model. In addition, the images have different sizes, so the dataset is resized to a uniform size. Pre-processing makes it easier for the system to process images because the image sizes become the same; preprocessing of any data improves the performance of applications (Hasan et al., 2020). Thresholding is also carried out as segmentation at the pre-processing stage, with threshold values of 50, 100, and 150.
3) Data Split. The dataset in this study comprises 7023 brain MRI images divided into 90% training data and 10% testing data, classified into four classes: glioma tumor, meningioma tumor, pituitary tumor, and no tumor. Training data is used to train the built model, while testing data is used to evaluate the model's performance (Tashtoush et al., 2023; Saifan & Jubair, 2022).
4) Design Model. One way to improve accuracy is through CNN model design (Bekhet et al., 2022). This study used the CNN method to classify MRI image data.
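The thresholding segmentation described in the pre-processing step above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; OpenCV's cv2.threshold with THRESH_TOZERO performs the same operation, but NumPy keeps the sketch self-contained. The sample array is purely illustrative.

```python
import numpy as np

def apply_threshold(gray, threshold):
    """Thresholding segmentation as described in the paper:
    pixels below the threshold become 0, pixels at or above it
    keep their original grayscale value (cf. cv2.THRESH_TOZERO)."""
    out = gray.copy()
    out[out < threshold] = 0
    return out

# Illustrative 2x2 "image"; the study used thresholds of 50, 100, and 150
img = np.array([[40, 100], [150, 200]], dtype=np.uint8)
print(apply_threshold(img, 100).tolist())  # [[0, 100], [150, 200]]
```

At a threshold of 150 the same sample would lose both of its darker pixels, which mirrors the paper's observation that high thresholds discard much of the brain structure.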
Preprocessed images are classified into four classes based on labels determined at the preprocessing stage; the image output from preprocessing becomes the input to the CNN model. In general, the CNN model consists of convolutional layers, ReLU layers, pooling, and fully connected layers. The convolutional layer is the stage where the number of filters is determined; this study used 3x3 filters, an input image of 150x150, and a stride (filter shift) of 1 pixel. ReLU is an operation that introduces nonlinearity and improves the representational power of the model: the output of a convolution result is 0 if the result is negative, and the value itself if it is positive. The pooling layer reduces the size of the matrix. There are two types of pooling: max pooling, which takes the highest value, and average pooling, which takes the average of the values produced when the image is convolved. This study used max pooling of size 2x2. The fully connected layer follows the convolution processes; all neuron activations from the previous layer are connected to all neurons in the next layer. This layer takes the values from previous processes and determines the features most correlated with each classification class.
Fig. 2. CNN model
Figure 2 shows the architecture used in this study. At the feature extraction stage, 3 filters are used in the first convolution, 8 in the second, 16 in the third, and 32 in the fourth, where each convolution stage uses a 3x3 filter, 2x2 max pooling, and stride 1. The prepared brain image is fed into the CNN model and passes through all of its filters; a feature map is created from images that have undergone feature extraction.
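The architecture described above (convolutions of 3, 8, 16, and 32 filters with 3x3 kernels, stride 1, and 2x2 max pooling) can be sketched in Keras. The paper does not publish its code, so the optimizer, loss function, padding, and exact layer ordering are assumptions; the dropout of 0.1 and the four-class softmax head follow the description later in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(150, 150, 1), num_classes=4):
    """Sketch of the four-stage feature extractor plus softmax classifier."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Feature extraction: 3 -> 8 -> 16 -> 32 filters, 3x3, stride 1
        layers.Conv2D(3, (3, 3), strides=1, activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(8, (3, 3), strides=1, activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (3, 3), strides=1, activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), strides=1, activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Flatten the feature map into a vector for the fully connected layer
        layers.Flatten(),
        layers.Dropout(0.1),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With valid (unpadded) 3x3 convolutions, a 150x150 input shrinks to a 7x7x32 feature map before flattening, and the softmax head outputs four class probabilities.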
It is important to flatten (reshape) the feature map into a vector that can be used as input to the fully connected layer, because the feature map created from feature extraction is still a multidimensional array. The fully connected layer, as commonly used in the multilayer perceptron, processes the data so that it can be classified. The difference between a fully connected layer and a regular convolution layer is that neurons in the convolution layer are connected only to specific input regions, while neurons in the fully connected layer are connected to all inputs; both, however, compute dot products, so their functions are similar. The output of the fully connected layer is passed to the last stage of the CNN model, softmax. At this stage the image is classified according to its similarity to the data seen during training. The softmax output covers four classes: glioma, no tumor, meningioma, and pituitary.
5) Testing Model. Testing the created model is performed by loading the previously saved model weights and compiling the trained model. Evaluation of the CNN model is carried out on the testing data and expressed in the form of a confusion matrix. The accuracy of the classification model is determined by the parameters TP (true positive), TN (true negative), FP (false positive), and FN (false negative) (Samosir et al., 2022).
Fig. 3. Confusion matrix
TP is the count of data predicted correctly; in Figure 3, for the glioma class, cell C1 is the TP value. TN is the sum of all values in the confusion matrix excluding the row and column of the class being calculated; for the glioma class, TN = C6 + C7 + C8 + C10 + C11 + C12 + C14 + C15 + C16.
FP is the sum of the column values of the class being calculated, excluding the TP value; for the glioma class, FP = C5 + C9 + C13. FN is the sum of the row values of the class being calculated, excluding the TP value; for the glioma class, FN = C2 + C3 + C4. After obtaining the values in the confusion matrix, accuracy can be computed with the following equation (Baranwal et al., 2020):

Accuracy = (TP + TN) / (TP + FP + FN + TN)

4. Results and Discussions
This research was conducted using Python in Jupyter notebooks as the tools for classifying brain tumors. Before classifying, it is necessary to prepare the libraries used for classifying medical brain tumor images; once the libraries are in place, colors are declared to make the stages of the classification process easier to distinguish. The libraries needed in this study are Matplotlib for drawing graphs, NumPy for numerical computation, pandas for processing and analyzing data, TensorFlow for building neural networks, tqdm for displaying progress bars over loops, and scikit-learn for machine learning utilities. In addition, labels are generated so that the machine can quickly determine the class of each dataset item: glioma corresponds to an output of 0, meningioma to 1, pituitary to 2, and notumor to 3. Classification classes are labeled so that the machine can distinguish them based on an array of data. All image data to be classified is first pre-processed so that all images are the same size. The dataset is then trained using the CNN model to classify it according to the previously assigned class labels.
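The TP/TN/FP/FN bookkeeping and the accuracy equation from the testing-model subsection can be sketched as follows; the 4x4 confusion matrix here is illustrative, not the study's results.

```python
import numpy as np

def class_counts(cm, i):
    """Per-class counts from a KxK confusion matrix
    (rows = actual class, columns = predicted class)."""
    tp = cm[i, i]                  # correctly predicted as class i (the C1 cell)
    fp = cm[:, i].sum() - tp       # column sum of class i, excluding TP
    fn = cm[i, :].sum() - tp       # row sum of class i, excluding TP
    tn = cm.sum() - tp - fp - fn   # everything outside row i and column i
    return tp, tn, fp, fn

def accuracy(cm, i):
    tp, tn, fp, fn = class_counts(cm, i)
    return (tp + tn) / (tp + fp + fn + tn)

# Illustrative matrix for the classes glioma, meningioma, pituitary, notumor
cm = np.array([[5, 1, 0, 0],
               [0, 6, 1, 0],
               [0, 0, 7, 1],
               [1, 0, 0, 8]])
print(round(accuracy(cm, 0), 3))  # 0.933
```

For class 0 in this example, TP = 5, FP = 1, FN = 1, and TN = 23, giving (5 + 23) / 30 ≈ 0.933, exactly the equation above.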
This research uses several Python libraries to help produce a good, fast, and accurate classification model. Because the dataset images have unequal sizes, every image is resized to 150x150 so the whole dataset is uniform; blurring is then applied to remove noise from the images. The images still consist of three channels (RGB), so grayscaling is performed to convert them to gray; changing from RGB to grayscale makes it easier for the machine to convert grayscale values to binary. The grayscale images are then segmented. The segmentation used in this study is the thresholding method: thresholding extracts an image by changing digital values that cross a limit value.
The CNN method in this study is applied after the images are segmented, so that each image has the same number of pixels as the others. Pre-processing and segmentation produce image output of 150x150 pixels, which is used as input to the CNN for classification. The 150x150-pixel images go through a feature extraction process with 3 filters in the first convolution, 8 in the second, 16 in the third, and 32 in the fourth, with the same stride and pooling throughout, namely stride 1 and 2x2 max pooling (Saifan & Jubair, 2022). After feature extraction, the resulting features are flattened and classified by the fully connected layer into four MRI image classes using softmax with a dropout value of 0.1. Before the training process, the dataset is divided into batches (Deng et al., 2021), that is, groups of a certain amount of data.
The first group of data is called group 1, the next group 2, and so on until the last group. The dataset training process is carried out with a batch size of 32, so the dataset is grouped into batches of 32 items, and training is performed for 13 epochs.
1) CNN model without segmentation
Fig. 4. Image output after preprocessing
Figure 4 shows the image output of preprocessing without thresholding segmentation, so there is no change in the extracted image values. After preprocessing, the images are entered into the CNN architecture for classification. Classification with the CNN architecture without segmentation scored an accuracy of 85% in the glioma class, 96% in the notumor class, 90% in the meningioma class, and 96% in the pituitary class. These results were obtained from 703 testing items, of which 645 were classified correctly according to their class and 58 were misclassified.

Table 1 - Classification report without segmentation, CNN model
Index  Classes     Data  True (%)  False (%)
0      glioma      170   85        15
1      notumor     203   96        4
2      meningioma  174   90        10
3      pituitary   156   96        4

Table 1 shows that without segmentation in preprocessing, the CNN model can classify with an average accuracy of 92%: the system built can classify brain images well.
2) CNN model with segmentation
After classifying without segmentation, segmentation is added at the preprocessing stage, namely thresholding segmentation with a specific value. The first threshold value given is 150: pixels with a value below 150 are changed to 0, and values greater than or equal to 150 are left unchanged.
Fig. 5.
Output image with thresholding 150
Figure 5 shows segmentation using a threshold of 150, where values in the image below 150 are changed to 0. The figure shows that much of the image content is lost because many pixel values fail to reach the predetermined threshold. The segmentation results are input into the CNN architecture for feature extraction and classification. The CNN model successfully classified 519 of the 703 testing items, while 184 images were misclassified; the lowest class percentage, 64%, was obtained in the meningioma class.

Table 2 - Classification report with thresholding segmentation 150
Index  Classes     Data  True (%)  False (%)
0      glioma      170   72        28
1      notumor     203   86        14
2      meningioma  174   64        36
3      pituitary   156   71        29

Based on Table 2, classification of brain images using the CNN model with thresholding segmentation of 150 achieves 72% for the glioma class, 86% for the notumor class, 64% for the meningioma class, and 71% for the pituitary class. The average accuracy at a threshold of 150 is 73%.
The second segmentation test gives a threshold value of 100 at the preprocessing stage. All values above the threshold are left unchanged, and values below it are changed to 0.
Fig. 6. Output image with thresholding 100
Based on Figure 6, a threshold of 100 displays the image's shape much more clearly, close to the original, compared to the threshold of 150, at which the shape of the image is almost invisible. The imagery is fed into the CNN architecture to extract its features for classification. Classification with a threshold segmentation of 100 achieved an average accuracy of 88%.
This classification result is an improvement over the result using thresholding segmentation of 150.

Table 3 - Classification report with thresholding segmentation 100
Index  Classes     Data  True (%)  False (%)
0      glioma      170   88        12
1      notumor     203   96        4
2      meningioma  174   78        22
3      pituitary   156   90        10

Based on Table 3, classification of brain images with a threshold value of 100 at the preprocessing stage gets an average accuracy of 88%, with the glioma class at 88%, notumor 96%, meningioma 78%, and pituitary 90%.
The third segmentation test lowers the threshold value to 50, to analyze whether a lower segmentation value performs better than a high one.
Fig. 7. Output image with thresholding 50
Figure 7 shows the output of a brain image with a threshold value of 50. The segmented output separates the background well, but the shape of the brain is no longer recognizable: the low threshold value makes the output no longer look like an image of a human brain. In the model test, 592 items were classified correctly according to their class, and 111 were misclassified.

Table 4 - Classification report with thresholding segmentation 50
Index  Classes     Data  True (%)  False (%)
0      glioma      170   79        21
1      notumor     203   96        4
2      meningioma  174   74        26
3      pituitary   156   86        14

Based on Table 4, classification with a thresholding segmentation of 50 achieved an average accuracy of 84%, higher than the 73% at a threshold of 150 but lower than the 88% at a threshold of 100; the value is obtained from class results of 79% for glioma, 96% for notumor, 74% for meningioma, and 86% for pituitary.
5.
Conclusion
Based on the results and discussion, the model built achieves a good accuracy score for each test class. All experiments were carried out with the same preprocessing and feature extraction stages built on the CNN model.
Fig. 8. Classification report chart: accuracy (%) of 92 (no segmentation), 73 (threshold 150), 88 (threshold 100), and 84 (threshold 50)
Based on Figure 8, the CNN model without segmentation classifies well. The highest accuracy, 92%, was obtained in the classification without thresholding segmentation, while classification with segmentation got its highest score, 88%, at a threshold of 100. This shows that the CNN model can classify very well without using either low or high thresholding segmentation. Compared with the previous research (Al-Hadidi et al., 2020), which got an accuracy of 75% for brain tumor classification, this study got a highest accuracy of 92% in the experiments without threshold values, and none of the three threshold values was more accurate than the CNN model without segmentation. It can be concluded that the CNN model without thresholding segmentation classifies brain tumor images with higher accuracy than the model with thresholding segmentation added to image preprocessing. Based on these findings, we suggest that when performing image classification with a CNN model there is no need to add thresholding segmentation at the preprocessing stage of the dataset.

References
Al-Hadidi, M. R., AlSaaidah, B., & Al-Gawagzeh, M. Y. (2020). Glioblastomas brain tumour segmentation based on convolutional neural networks. International Journal of Electrical and Computer Engineering, 10(5), 4738–4744. https://doi.org/10.11591/ijece.v10i5.pp4738-4744
Angeli, S., Emblem, K. E., Due-Tonnessen, P., & Stylianopoulos, T. (2018).
Towards patient-specific modeling of brain tumor growth and formation of secondary nodes guided by DTI-MRI. NeuroImage: Clinical, 20, 664–673. https://doi.org/10.1016/j.nicl.2018.08.032
Arabahmadi, M., & Farahbakhsh, R. (2022). Deep Learning for Smart Healthcare: A Survey on Brain Tumor. Sensors, 22, 1–27. https://doi.org/10.3390/s22051960
Badr, C. E., Silver, D. J., Siebzehnrubl, F. A., & Deleyrolle, L. P. (2020). Metabolic heterogeneity and adaptability in brain tumors. Cellular and Molecular Life Sciences, 77(24), 5101–5119. https://doi.org/10.1007/s00018-020-03569-w
Baranwal, S. K., Jaiswal, K., Vaibhav, K., Kumar, A., & Srikantaswamy, R. (2020). Performance analysis of Brain Tumour Image Classification using CNN and SVM. Proceedings of the 2nd International Conference on Inventive Research in Computing Applications, ICIRCA 2020, 537–542. https://doi.org/10.1109/ICIRCA48905.2020.9183023
Bekhet, S., Alghamdi, A. M., & Taj-Eddin, I. (2022). Gender recognition from unconstrained selfie images: a convolutional neural network approach. International Journal of Electrical and Computer Engineering, 12(2), 2066–2078. https://doi.org/10.11591/ijece.v12i2.pp2066-2078
Chahal, P. K., Pandey, S., & Goel, S. (2020). A survey on brain tumor detection techniques for MR images. Multimedia Tools and Applications, 79(29–30), 21771–21814. https://doi.org/10.1007/s11042-020-08898-3
D, A. (2019). Face Recognition using Machine Learning Algorithms. Journal of Mechanics of Continua and Mathematical Sciences, 14(3). https://doi.org/10.26782/jmcms.2019.06.00017
Das, S., Aranya, O. F. M. R. R., & Labiba, N. N. (2019). Brain Tumor Classification Using Convolutional Neural Network. 1st International Conference on Advances in Science, Engineering and Robotics Technology 2019, ICASERT 2019, 1–6. https://doi.org/10.1109/ICASERT.2019.8934603
Deng, H., Zhang, W. X., & Liang, Z. F. (2021).
Application of BP Neural Network and Convolutional Neural Network (CNN) in Bearing Fault Diagnosis. IOP Conference Series: Materials Science and Engineering, 1043(4), 1–10. https://doi.org/10.1088/1757-899X/1043/4/042026
Hadinata, P. N., Simanta, D., Eddy, L., & Nagai, K. (2023). Multiclass Segmentation of Concrete Surface Damages Using. Applied Sciences.
Hasan, M. M., Ali, H., Hossain, M. F., & Abujar, S. (2020). Preprocessing of Continuous Bengali Speech for Feature Extraction. 2020 11th International Conference on Computing, Communication and Networking Technologies, ICCCNT 2020, 1–4. https://doi.org/10.1109/ICCCNT49239.2020.9225469
Ismael, M. R., & Abdel-Qader, I. (2018). Brain Tumor Classification via Statistical Features and Back-Propagation Neural Network. IEEE International Conference on Electro Information Technology, 2018-May, 252–257. https://doi.org/10.1109/EIT.2018.8500308
Le, D. N., Parvathy, V. S., Gupta, D., Khanna, A., Rodrigues, J. J. P. C., & Shankar, K. (2021). IoT enabled depthwise separable convolution neural network with deep support vector machine for COVID-19 diagnosis and classification. International Journal of Machine Learning and Cybernetics, 12(11), 3235–3248. https://doi.org/10.1007/s13042-020-01248-7
Li, Q., Liu, X., He, Y., Li, D., & Xue, J. (2023). Temperature guided network for 3D joint segmentation of the pancreas and tumors. Neural Networks, 157, 387–403. https://doi.org/10.1016/j.neunet.2022.10.026
Meranggi, D. G. T., Yudistira, N., & Sari, Y. A. (2022). Batik Classification Using Convolutional Neural Network with Data Improvements. International Journal on Informatics Visualization, 6(1), 6–11. https://doi.org/10.30630/joiv.6.1.716
Niharmine, L., Outtaj, B., & Azouaoui, A. (2022). Tifinagh handwritten character recognition using optimized convolutional neural network. International Journal of Electrical and Computer Engineering, 12(4), 4164–4171.
https://doi.org/10.11591/ijece.v12i4.pp4164-4171
Pashaei, A., Sajedi, H., & Jazayeri, N. (2018). Brain tumor classification via convolutional neural network and extreme learning machines. 2018 8th International Conference on Computer and Knowledge Engineering, ICCKE 2018, 314–319. https://doi.org/10.1109/ICCKE.2018.8566571
Saifan, R., & Jubair, F. (2022). Six skin diseases classification using deep convolutional neural network. International Journal of Electrical and Computer Engineering, 12(3), 3072–3082. https://doi.org/10.11591/ijece.v12i3.pp3072-3082
Samosir, R. S., Abdurachman, E., Gaol, F. L., & Sabarguna, B. S. (2022). Brain tumor segmentation using double density dual tree complex wavelet transform combined with convolutional neural network and genetic algorithm. IAES International Journal of Artificial Intelligence, 11(4), 1373–1383. https://doi.org/10.11591/ijai.v11.i4.pp1373-1383
Seetha, J., & Raja, S. S. (2018). Brain tumor classification using Convolutional Neural Networks. Biomedical and Pharmacology Journal, 11(3), 1457–1461. https://doi.org/10.13005/bpj/1511
Shi, F., Wang, J., Shi, J., Wu, Z., Wang, Q., Tang, Z., He, K., Shi, Y., & Shen, D. (2021). Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19. IEEE Reviews in Biomedical Engineering, 14, 4–15. https://doi.org/10.1109/RBME.2020.2987975
Takahashi, M., Miki, S., Fujimoto, K., Fukuoka, K., Matsushita, Y., Maida, Y., Yasukawa, M., Hayashi, M., Shinkyo, R., Kikuchi, K., Mukasa, A., Nishikawa, R., Tamura, K., Narita, Y., Hamada, A., Masutomi, K., & Ichimura, K. (2019). Eribulin penetrates brain tumor tissue and prolongs survival of mice harboring intracerebral glioblastoma xenografts. Cancer Science, 110(7), 2247–2257. https://doi.org/10.1111/cas.14067
Tashtoush, Y., Obeidat, R., Al-Shorman, A., Darwish, O., Al-Ramahi, M., & Darweesh, D. (2023). Enhanced convolutional neural network for non-small cell lung cancer classification.
International Journal of Electrical and Computer Engineering (IJECE), 13(1), 1024–1038. https://doi.org/10.11591/ijece.v13i1.pp1024-1038
Xie, Y., Zaccagna, F., Rundo, L., Testa, C., Agati, R., Lodi, R., Manners, D. N., & Tonon, C. (2022). Convolutional Neural Network Techniques for Brain Tumor Classification (from 2015 to 2022): Review, Challenges, and Future Perspectives. Diagnostics, 12(8). https://doi.org/10.3390/diagnostics12081850
Zhao, X., Wu, Y., Song, G., Li, Z., Zhang, Y., & Fan, Y. (2018). A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Medical Image Analysis, 43, 98–111. https://doi.org/10.1016/j.media.2017.10.002