Journal of Mechanical Engineering Science and Technology ISSN 2580-0817
Vol. 6, No. 1, July 2022, pp. 40-47
DOI: 10.17977/um016v6i12022p040

Machine Vision for the Various Road Surface Type Classification Based on Texture Feature

Susi Marianingsih1*, Widodo1, Marla Sheilamita S. Pieter1, Evanita Veronica Manullang1, Hendry Y. Nanlohy2
1Faculty of Computer Science and Management, Jayapura University of Science and Technology, 99351, Indonesia
2Department of Mechanical Engineering, Jayapura University of Science and Technology, 99351, Indonesia
*Corresponding author: ssmarianingsih@gmail.com

Article history: Received: 9 April 2022 / Received in revised form: 10 May 2022 / Accepted: 10 June 2022

ABSTRACT
The ability of a machine to determine the road surface type is a key piece of information for the navigation of autonomous transportation machines such as wheelchairs and smart cars. In the present work, texture features are extracted from road surface images using the Gray Level Co-occurrence Matrix (GLCM). A K-Nearest Neighbor (K-NN) classifier was then built to classify the road surface images into three classes: asphalt, gravel, and pavement, and its performance was compared with a Naïve Bayes (NB) classifier. We constructed a road image dataset of 450 samples from real-world images of asphalt, gravel, and pavement roads. The experimental results show that the classification accuracy of the K-NN classifier is 78%, which is better than the Naïve Bayes classifier with an accuracy of 72%. The pavement class has the lowest accuracy under both classifiers. The two classifiers have nearly the same computing time: 3.459 seconds for the K-NN classifier and 3.464 seconds for the Naïve Bayes classifier.
Copyright © 2022. Journal of Mechanical Engineering Science and Technology.

Keywords: Co-occurrence matrix, image data set, K-Nearest Neighbor, Naïve Bayes, road surface types

I. Introduction
Several recent studies show that machine learning is very useful in developing the automotive industry [1], [2]. This fact shows that the development of the automotive industry does not depend only on the field of mechanical engineering, i.e. fuel and engine performance [3]-[5], but also on machine learning and computer vision. Through computer vision, a machine vision system can be created to detect the road surface, which is a crucial piece of information for users of automated machine technologies [6]. Determination of the road surface type is important, especially for developing safety features for transportation users and for minimizing congestion and accidents [7]. Image classification is one way to solve problems that can arise in automatic driving technologies. Texture features of the road surface are one kind of image information that can be applied to analyze the type of road surface [8], [9]. However, the variety of road surface models can cause problems; therefore, a better and more accurate method is needed to classify road surface conditions [10].

Some researchers use the GLCM method for describing texture details in the spatial domain and in edge images [9], [11].
Moreover, GLCM has also been used to detect rock images for road type classification based on visual data, and to characterize road surfaces by several aspects such as texture [12], color [13], and edge features of the rider's view image in order to train a neural network. In addition to GLCM, several classification methods such as convolutional neural networks [14] and support vector machines [15] have been used for detection, but the accuracy was low [16] and the sample size was too small, with an accuracy below 60% [8]. Furthermore, other classifiers have also been used, such as K-NN [9], [17] and NB [18], [19]. However, the use of these classification methods has focused only on determining location, classifying images, and combining the K-NN [20] and NB methods to characterize numerical and total attributes [21].

Based on the brief description above, it can be seen that the use of classification methods to detect and provide detailed information about an object is very important. However, because of the low accuracy and small sample sizes of previous work, not much scientific information has been revealed, and the detection of road surface types based on texture features has not yet been demonstrated. Therefore, further research is needed to produce scientific information about the detection of three road types (asphalt, gravel, and pavement) based on texture features using the GLCM, K-NN, and NB methods.

II. Material and Methods

A. Road Image Dataset
Figure 1 and Figure 3 show a sample image of each class in the dataset. The dataset was constructed from 90 road images with good illumination taken from Google Street View, divided into 3 classes with 30 images per class.

Fig. 1. Sampling images from each class.

Furthermore, we extract 50 x 50 part-images from the obtained images to create our dataset (see Figure 2). In total, the data collection contains 450 road surface images, 300 for training and 150 for testing.

Fig. 2. Extraction of part-images from the road image

Fig. 3. The typical surface images: (a), (b), (c)

B. GLCM Texture Features
GLCM is an approach for extracting texture features from an image. GLCM identifies the relationship between two neighboring pixels; each pixel pair has a gray level, a distance, and an angle attribute. This study uses a gray-level range of 0-255. The distance in GLCM is the number of pixels between the reference pixel and the neighboring pixel, and 8 angles can be used in GLCM: 0°, 45°, 90°, 135°, 180°, 225°, 270°, or 315°. This study uses 4 angles, namely 0°, 45°, 90°, and 135°, because the GLCM value at an angle of 0° is equal to the value at 180°, and likewise the values at 45°, 90°, and 135° are equal to those at 225°, 270°, and 315°. The GLCM technique begins with the creation of a matrix whose size corresponds to the range of gray levels. The second step is to build the co-occurrence matrix, that is, to fill the matrix with the number of pixel pairs of each pair of gray levels for a given combination of distance and angle. The third step is to create a symmetric matrix by adding the co-occurrence matrix to its transpose. Finally, the co-occurrence matrix is normalized by dividing each matrix element by the sum of all the matrix elements.

Fig. 4. The example calculation of the GLCM matrix.
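As a concrete illustration of the procedure above, the following is a minimal sketch of extracting the four texture features used in this study from a single grayscale patch. It is not the authors' implementation: it assumes scikit-image and NumPy are available, an 8-bit grayscale 50 x 50 patch as input, and the helper name glcm_features is illustrative. Entropy is computed directly from the normalized matrix because graycoprops does not provide it.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # 0°, 45°, 90°, 135°

def glcm_features(gray_patch, distance=1):
    """Return [ASM, entropy, contrast, correlation] for an 8-bit grayscale patch,
    averaged over the four angles."""
    # Co-occurrence matrix over 256 gray levels, made symmetric and normalized,
    # following the four steps described in the text.
    glcm = graycomatrix(gray_patch, distances=[distance], angles=ANGLES,
                        levels=256, symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM").mean()
    contrast = graycoprops(glcm, "contrast").mean()
    correlation = graycoprops(glcm, "correlation").mean()
    # Entropy is not provided by graycoprops; compute -sum(p * log2(p)) over the
    # non-zero entries of each angle's normalized matrix and average the results.
    per_angle = [-np.sum(p[p > 0] * np.log2(p[p > 0]))
                 for p in np.moveaxis(glcm[:, :, 0, :], -1, 0)]
    entropy = np.mean(per_angle)
    return np.array([asm, entropy, contrast, correlation])
```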
The texture features of an image are obtained by calculating second-order statistical features [22]. Several attributes are used to shorten the computation time, i.e. Angular Second Moment (ASM), entropy, contrast, and correlation. The ASM feature is a measure of image homogeneity. The entropy feature is a measure of gray-level irregularity in the image; it is high if the GLCM elements have relatively equal values and low if the GLCM elements are close to zero or one.

C. Classification of Road Surface Type Images
To classify the type of road surface, two methods are used, namely KNN and NB. The KNN classifier is a supervised machine-learning algorithm that can be used for classification; it uses a similarity measure to compare the values of the test data with those of the training set. KNN has four steps: (1) load the training data and the test data; (2) select the value of k; (3) compute the distance between the test data and every row of the training data using the Euclidean distance, and sort the rows in ascending order of distance; (4) choose the top k rows from the sorted list and assign the test point to the majority class of these rows. On the other hand, the NB classifier is a probabilistic approach aimed at achieving the best prediction and increasing the accuracy level.

III. Results and Discussions
This study measures performance using the confusion matrix. The confusion matrix consists of four basic values (true positive, true negative, false positive, and false negative) that are used to define the measurement metrics of the classifier: recall, precision, F-measure, and accuracy [23]. An example of a confusion matrix is shown in Table 1. The number of negative examples classified correctly is the true negative value, and the number of positive examples classified correctly is the true positive value. The number of actual negative examples classified as positive is the false positive value, and the number of actual positive examples classified as negative is the false negative value. This scheme is the same as in previous studies that discussed vehicle safety systems to minimize accidents [23].

Table 1. Confusion Matrix

                          Predicted
                          Negative           Positive
Actual    Negative        True negative      False positive
          Positive        False negative     True positive

Recall is the proportion of actual positive examples that are classified correctly. Precision is the proportion of predicted positive examples that are actually positive. F-Measure is derived from precision and recall. Accuracy is the proportion of all predictions that are correct. The equations are:

Recall = True Positive / (False Negative + True Positive)                                            (1)
Precision = True Positive / (False Positive + True Positive)                                         (2)
F-Measure = (2 x Recall x Precision) / (Recall + Precision)                                          (3)
Accuracy = (True Negative + True Positive) / (True Negative + False Negative + True Positive + False Positive)   (4)
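For a multi-class problem such as this one, Eqs. (1)-(4) are applied per class. The sketch below is an illustration only (the function name and NumPy usage are assumptions, not the authors' code); it computes the per-class metrics from a confusion matrix whose rows are the actual classes and whose columns are the predicted classes. Applied to the K-NN confusion matrix reported later in Table 2, it reproduces the values in Table 3 and the 78% overall accuracy.

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class recall, precision, F-measure (Eqs. 1-3) and overall accuracy (Eq. 4)."""
    cm = np.asarray(cm, dtype=float)          # rows: actual class, columns: predicted class
    tp = np.diag(cm)                          # correctly classified samples of each class
    fn = cm.sum(axis=1) - tp                  # samples of the class predicted as another class
    fp = cm.sum(axis=0) - tp                  # samples of other classes predicted as this class
    recall = tp / (fn + tp)                   # Eq. (1)
    precision = tp / (fp + tp)                # Eq. (2)
    f_measure = 2 * recall * precision / (recall + precision)   # Eq. (3)
    accuracy = tp.sum() / cm.sum()            # Eq. (4)
    return recall, precision, f_measure, accuracy

# K-NN (k = 2) confusion matrix from Table 2, class order: asphalt, gravel, pavement.
knn_cm = [[49, 1, 0],
          [8, 36, 6],
          [5, 13, 32]]
print(per_class_metrics(knn_cm))   # recall ~ (0.98, 0.72, 0.64), overall accuracy ~ 0.78
```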
The road image dataset (see Figure 3) consists of 450 images, which are divided into 300 training images (100 images per class) and a test set of 150 images (50 images per class). The GLCM method is used to extract the texture features (entropy, contrast, correlation, and ASM) from each image. The features of each image are stored in a vector, and these feature vectors are the inputs for the KNN Classifier and the Naïve Bayes Classifier; a code sketch of this classification step is given after Table 2. Figure 5 shows the block diagram of the system.

The first experiment determines the value of k that gives the best classification; the accuracy for several k values is shown in Fig. 6. The results show that k = 2 gives the best performance, with an accuracy of about 78%. We use this accuracy for comparison with the Naïve Bayes Classifier. This result indicates that the classification method gives positive results and has the potential to be used in identifying road surfaces. This analysis is plausible because it is in line with the results of previous studies [19]. Furthermore, with an accuracy above 70%, it has the potential to improve automatic driving technologies, which is beneficial in reducing congestion and accidents. This finding is important because it provides additional scientific information on the impact received by a car or driver as a result of vibrations [24], [25] caused by changes in the road surface.

Figure 5. Block diagram of the system

Figure 6. Effect of k value on experiment classification accuracy

The confusion matrices and the performance measurements of the KNN Classifier with k = 2 and of the Naïve Bayes Classifier are shown in Table 2 and Table 3. The results indicate that the KNN Classifier has the best performance on the asphalt class and the Naïve Bayes Classifier has the best performance on the gravel class. Overall, the KNN classifier produces a better accuracy, around 78%, than the Naïve Bayes classifier, around 72%. This result agrees with previous research on road surface conditions, discussed from a different perspective [6], [23], which stated that the KNN classifier is more recommended for detecting slippery road surface conditions during the rainy season because it produces a better level of accuracy than the Naïve Bayes classifier. Furthermore, we repeated the classification ten times with the same training and test data. The computation times of the KNN classifier and the Naive Bayes classifier tend to be the same: the average computation time of KNN is 3.459 seconds and that of Naive Bayes is 3.464 seconds, with a standard deviation of about 1.4 seconds.

Table 2. Confusion matrices of the KNN Classifier (k = 2) and the Naïve Bayes Classifier

Class        Asphalt         Gravel          Pavement
             KNN    NB       KNN    NB       KNN    NB
Asphalt      49     38       1      8        0      4
Gravel       8      3        36     46       6      1
Pavement     5      2        13     39       32     9
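The classification step referenced above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the GLCM feature vectors have already been extracted into training and test arrays, and it uses scikit-learn's KNeighborsClassifier and GaussianNB. The paper does not state which Naïve Bayes variant was employed, so the Gaussian variant and the function name run_experiment are assumptions for illustration.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, confusion_matrix

def run_experiment(X_train, y_train, X_test, y_test):
    """Fit K-NN (k = 2) and a Gaussian Naive Bayes model on GLCM feature vectors
    and return the accuracy and confusion matrix of each classifier."""
    classifiers = {
        "KNN (k=2)": KNeighborsClassifier(n_neighbors=2),  # k = 2 gave the best accuracy here
        "Naive Bayes": GaussianNB(),                        # assumed NB variant for continuous features
    }
    results = {}
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)             # 300 training feature vectors
        y_pred = clf.predict(X_test)          # 150 test feature vectors
        results[name] = (accuracy_score(y_test, y_pred),
                         confusion_matrix(y_test, y_pred))
    return results
```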
Table 3. Performance measurements of the KNN Classifier (k = 2) and the Naïve Bayes Classifier

Class        Precision        Recall           F1-score
             KNN     NB       KNN     NB       KNN     NB
Asphalt      0.79    0.88     0.98    0.76     0.88    0.82
Gravel       0.72    0.49     0.72    0.92     0.72    0.64
Pavement     0.84    0.64     0.64    0.18     0.73    0.28

IV. Conclusions
In this study, we proposed features drawn from texture information and used the KNN Classifier to classify road surface types, namely asphalt, gravel, and pavement. The KNN Classifier with a k value of 2 has the best performance, with an accuracy of 78%; under this classifier the asphalt class has the highest accuracy. As a comparison, the same dataset was classified using the Naive Bayes Classifier, which yields an accuracy of 72%; under this classifier the gravel class has the highest accuracy. The pavement class has the lowest accuracy under both classifier methods. The two classifiers have nearly the same computing time, 3.459 seconds for the KNN Classifier and 3.464 seconds for the Naive Bayes Classifier. To obtain more comprehensive scientific information about road surface detection and classification, future studies can combine color and texture features, which may improve the accuracy to 80% or even 90%, and can also use or compare other classifiers.

Acknowledgment
The authors are very grateful to the Faculty of Computer Science and Management of Jayapura University of Science and Technology, through the Computer Vision Research Group, which has supported the funding of research and equipment so that this research could be completed properly.

References
[1] G. Mao, C. Zhang, K. Shi, W. Ping, "Prediction of the performance and exhaust emissions of ethanol-diesel engine using different neural network," Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, pp. 1-15, August 2019, https://doi.org/10.1080/15567036.2019.1656307.
[2] A. K. Tahkur, R. S. Kaviti, A. Gehlot, "Modelling the performance and emissions of ethanol-gasoline blend on a gasoline engine using ANFIS," International Journal of Ambient Energy, January 2021, https://doi.org/10.1080/01430750.2021.1873856.
[3] H. Y. Nanlohy, I. N. G. Wardana, N. Hamidi, L. Yuliati, and T. Ueda, "The effect of Rh3+ catalyst on the combustion characteristics of crude vegetable oil droplets," Fuel, vol. 220, pp. 220–232, May 2018, doi: 10.1016/j.fuel.2018.02.001.
[4] H. Y. Nanlohy, I. N. G. Wardana, M. Yamaguchi, and T. Ueda, "The role of rhodium sulfate on the bond angles of triglyceride molecules and their effect on the combustion characteristics of crude jatropha oil droplets," Fuel, vol. 279, February 2020, doi: 10.1016/j.fuel.2020.118373.
[5] H. Y. Nanlohy, "Performance and Emissions Analysis of BE85-Gasoline Blends on Spark Ignition Engine," Automot. Exp., vol. 5, no. 1, pp. 40–48, 2022, doi: https://doi.org/10316/ae.6116.
[6] D. Arya et al., "Deep learning-based road damage detection and classification for multiple countries," Autom. Constr., vol. 132, p. 103935, 2021, doi: 10.1016/j.autcon.2021.103935.
[7] J. Menegazzo, A. von Wangenheim, "Road surface type classification based on inertial sensors and machine learning," Computing, vol. 103, pp. 2143–2170, 2021, doi: 10.1007/s00607-021-00914-0.
[8] T. Beilfuss, K. P. Kortmann, M. Wielitzka, C. Hansen, and T. Ortmaier, "Real-Time Classification of Road Type and Condition in Passenger Vehicles," IFAC-PapersOnLine, vol. 53, no. 2, pp. 14254–14260, 2020, doi: 10.1016/j.ifacol.2020.12.1161.
[9] S. Marianingsih, F. Utaminingrum, and F. A. Bachtiar, "Road surface types classification using combination of K-nearest neighbor and Naïve Bayes based on GLCM," Int. J. Adv. Soft Comput. its Appl., vol. 11, no. 2, pp. 15–27, 2019.
[10] D. Fink, A. Busch, M. Wielitzka, and T. Ortmaier, "Resource efficient classification of road conditions through CNN pruning," IFAC-PapersOnLine, vol. 53, no. 2, pp. 13958–13963, 2020, doi: 10.1016/j.ifacol.2020.12.913.
[11] F. Utaminingrum, T. A. Kurniawan, M. A. Fauzi, R. Maulana, and D. Syauqy, "A laser-vision based obstacle detection and distance estimation for smart wheelchair navigation," 2016 IEEE Int. Conf. Signal Image Process. ICSIP 2016, pp. 123–127, 2017, doi: 10.1109/SIPROCESS.2016.7888236.
[12] M. D'Apuzzo, A. Evangelisti, and V. Nicolosi, "An exploratory step for a general unified approach to labelling of road surface and tyre wet friction," Accid. Anal. Prev., vol. 138, p. 105462, 2020, doi: 10.1016/j.aap.2020.105462.
[13] A. Septiarini, A. Sunyoto, H. Hamdani, A. A. Kasim, F. Utaminingrum, and H. R. Hatta, "Machine vision for the maturity classification of oil palm fresh fruit bunches based on color and texture features," Sci. Hortic. (Amsterdam), vol. 286, p. 110245, 2021, doi: 10.1016/j.scienta.2021.110245.
[14] M. N. Khan and M. M. Ahmed, "Weather and surface condition detection based on road-side webcams: Application of pre-trained Convolutional Neural Network," Int. J. Transp. Sci. Technol., pp. 1–16, 2021, doi: 10.1016/j.ijtst.2021.06.003.
[15] C. Gorges, K. Öztürk, and R. Liebich, "Impact detection using a machine learning approach and experimental road roughness classification," Mech. Syst. Signal Process., vol. 117, pp. 738–756, 2019, doi: 10.1016/j.ymssp.2018.07.043.
[16] A. Gholamhosseinian and J. Seitz, "Safety-Centric Vehicle Classification Using Vehicular Networks," Procedia Comput. Sci., vol. 191, pp. 238–245, 2021, doi: 10.1016/j.procs.2021.07.030.
[17] B. Behera and R. Sikka, "Deep learning for observation of road surfaces and identification of path holes," Mater. Today Proc., 2021, doi: 10.1016/j.matpr.2021.03.197.
[18] S. Marianingsih and F. Utaminingrum, "Comparison of Support Vector Machine Classifier and Naïve Bayes Classifier on Road Surface Type Classification," 3rd Int. Conf. Sustain. Inf. Eng. Technol. SIET 2018 - Proc., pp. 48–53, 2018, doi: 10.1109/SIET.2018.8693113.
[19] L. Cheng, X. Zhang, and J. Shen, "Road surface condition classification using deep learning," J. Vis. Commun. Image Represent., vol. 64, p. 102638, 2019, doi: 10.1016/j.jvcir.2019.102638.
[20] M. A. Agebure, E. O. Oyetunji, and E. Y. Baagyere, "A three-tier road condition classification system using a spiking neural network model," J. King Saud Univ. - Comput. Inf. Sci., vol. 34, no. 5, pp. 1718–1729, 2022, doi: 10.1016/j.jksuci.2020.08.012.
[21] S. Shim, J. Kim, S. W. Lee, and G. C. Cho, "Road surface damage detection based on hierarchical architecture using lightweight auto-encoder network," Autom. Constr., vol. 130, p. 103833, 2021, doi: 10.1016/j.autcon.2021.103833.
[22] S. Sattar, S. Li, and M. Chapman, "Developing a near real-time road surface anomaly detection approach for road surface monitoring," Meas. J. Int. Meas. Confed., vol. 185, p. 109990, 2021, doi: 10.1016/j.measurement.2021.109990.
[23] S. Kim, J. Lee, and T. Yoon, "Road surface conditions forecasting in rainy weather using artificial neural networks," Saf. Sci., vol. 140, p. 105302, 2021, doi: 10.1016/j.ssci.2021.105302.
[24] J. Li, T. Liu, X. Wang, and J. Yu, "Automated asphalt pavement damage rate detection based on optimized GA-CNN," Autom. Constr., vol. 136, p. 104180, April 2022.
[25] G. Doğan and B. Ergen, "A new mobile convolutional neural network-based approach for pixel-wise road surface crack detection," Measurement, vol. 195, p. 111119, May 2022.