International Journal of Applied Sciences and Smart Technologies
Volume 1, Issue 2, pages 121–128
p-ISSN 2655-8564, e-ISSN 2685-9432

Saving the Moving Position on the Continuous Passive Motion Machine for Rehabilitation of Shoulder Joints

Antonius Hendro Noviyanto
Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia
Corresponding author: hendro@pmsd.ac.id
(Received 03-06-2019; revised 29-10-2019; accepted 29-10-2019)

Abstract

This paper presents a continuous passive motion (CPM) machine for motion therapy, applied to the shoulder joint and equipped with movement storage. Joint rehabilitation is carried out through continuous passive movements. Such movement is intended not to overload the muscles and to prevent stiffness in the joints of patients after surgery or stroke, or of patients who have been immobilized for a long time. The CPM machine developed here can perform flexion and horizontal abduction. Positions are stored over the range of the flexion and horizontal abduction movements. With movement storage, therapy exercises for patients with joint stiffness can be performed passively and continuously.

Keywords: CPM machine, joint rehabilitation, flexion, horizontal abduction

1 Introduction

Patients recovering from joint surgery or joint injuries are required to perform joint movement exercises. These exercises can be performed using a therapy device called a continuous passive motion machine (CPM machine) [1]. The basic working principle of this device is to move the joint passively and continuously as needed, so that the patient's joints do not become stiff [2].
Joint stiffness can be caused by prolonged immobilization of the shoulder joint after joint surgery. It can also arise when patients are reluctant to move the joint because of pain. Based on an interview with Dr. Hermawan Nagar Rasyid, dr., SpOT., M.T., Ph.D., FICS, therapy for the shoulder joint can consist of flexion and extension as well as horizontal adduction and horizontal abduction. Joint stiffness can be prevented by providing movement exercises that restore the range of motion (ROM) of the shoulder joint [3]. These movements can be given passively and repeatedly to patients with joint stiffness. A passive movement is a movement that does not require muscle work. Movement exercises are carried out 3-5 times a day, with each exercise lasting 1 hour. The movements that can be given include [4]:
1. Flexion/extension: to reduce tissue adhesions.
2. Adduction/abduction: to increase the range of motion of the shoulder.
3. Rotational motion: to increase the range of motion of the shoulder.
Training is given by increasing the angle of movement gradually, with a recommended speed of 1 rotation in 45 seconds [4]. Long immobilization can lead to joint stiffness, which can be reduced by moving the joints. Patients are advised to begin joint movement exercises immediately after joint surgery to reduce the risk of stiffness. The CPM machine is a device that provides a method for rehabilitating damaged joints [5]. Movement exercises using CPM machines can stimulate healing and regeneration of joint cartilage and prevent joint stiffness. The use of CPM machines has the following benefits [5]:
1. Improving nutrition and metabolic activity on the surface of the cartilage.
2. Stimulating mesenchymal cells to differentiate within the surface of the cartilage.
3. Accelerating healing of cartilage and periarticular tissue, such as tendons and ligaments.
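As a quick check on the recommended speed of 1 rotation in 45 seconds, a short Python sketch converts a rotation period to rpm; this is only illustrative arithmetic, not part of the device firmware.

```python
def seconds_per_rotation_to_rpm(seconds: float) -> float:
    """Convert a rotation period in seconds to revolutions per minute."""
    return 60.0 / seconds

# Recommended therapy speed: 1 rotation in 45 seconds [4]
print(round(seconds_per_rotation_to_rpm(45), 2))  # 1.33
```

The recommendation (about 1.33 rpm) is thus close to the lowest of the device's selectable speeds described later (1, 2, and 3 rpm).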
The use of a CPM machine in accordance with the procedure is expected to restore the joint's range of motion for patients with joint trauma or patients recovering from joint surgery [6]. Based on the need for therapeutic devices for the shoulder joint, this study therefore developed and tested movement storage on a CPM machine, so that the therapeutic device can be used repeatedly with the same movement mode according to the movements that have been stored.

2 Design

The CPM machine to be designed is a microcontroller-based system. The system consists of a microcontroller, a DC motor controller, and a rotary encoder. The overall system block diagram is shown in Figure 1.

Figure 1. System block diagram [7]

The therapeutic device can perform flexion and horizontal abduction, each through a set movement angle. Besides the movement angle, the user can also set the rotational speed of the device to 1 rpm, 2 rpm, or 3 rpm. The rotational speed is regulated using PID control [7]. As shown in Figure 2, the device has 3 working modes:
1. Mode I: flexion
2. Mode II: horizontal abduction
3. Mode III: saved position

Figure 2. Device movement modes [7]

In Mode I the device performs flexion through its set movement angle, and in Mode II it performs horizontal abduction through its set movement angle [7]. In Mode III the device moves according to the positions that have been saved. The stored positions can be changed as needed within the range of the flexion and horizontal abduction movements.
The storage system on the device uses the memory facilities available on the microcontroller. Data are stored in the form of the movement angle and the desired type of movement (flexion/abduction).

3 Method

The method used in this study is to conduct experiments on the device to obtain test results. The results obtained are the device's movement angle data and the results of movement based on position storage. In testing the movement angle of the device, the movement steps of the device are measured using a protractor. The results of these measurements are then compared with the desired requirements. Testing of movement according to storage is done by comparing the movements stored in memory from the first cycle with those of the following cycles.

4 Testing and Discussion

Testing of the device is done by testing the movement angle and the rotational speed of the device.

4.1 Testing and Discussion of Movement Angles

Two movement angles are measured for flexion and two for horizontal abduction. The results of the flexion angle test can be seen in Figure 3, while the results of the horizontal abduction angle test can be seen in Figure 4.

Figure 3. Testing of flexion movement angles: (a) motion with the first angle, (b) motion with the second angle

Figure 4. Testing of horizontal movement angles: (a) motion with the first angle, (b) motion with the second angle

The graphs in Figure 4 show motor movement that approaches a straight line. The movement angle and speed of the device differ little from the settings on the device.
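The storage scheme described above, pairs of movement type and angle kept in the microcontroller's memory, can be sketched in Python. This is only an illustrative model, not the actual firmware; the class and the angle values used are hypothetical.

```python
# Illustrative sketch (not the actual firmware): storing a sequence of
# movements as (movement type, angle) pairs, as described in the text.
from dataclasses import dataclass

FLEXION = "flexion"
ABDUCTION = "horizontal_abduction"

@dataclass
class Step:
    movement: str   # FLEXION or ABDUCTION
    angle: float    # target angle in degrees

class MovementMemory:
    """Mimics the microcontroller's stored-movement list (Mode III)."""
    def __init__(self):
        self.steps = []

    def save(self, movement: str, angle: float) -> None:
        self.steps.append(Step(movement, angle))

    def replay(self):
        # In Mode III the device replays the stored steps in order.
        for step in self.steps:
            yield step

memory = MovementMemory()
memory.save(FLEXION, 45.0)       # angle values here are placeholders
memory.save(ABDUCTION, 30.0)
print([(s.movement, s.angle) for s in memory.replay()])
```

Replaying the list in order corresponds to the repeatable cycles observed in the position-storage test.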
A comparison of the device's measured movements with the device settings can be seen in Table 1.

Table 1. Movement angle test results

No.  Device movement        Desired angle   Measured angle
1    Flexion
2    Flexion
3    Horizontal abduction
4    Horizontal abduction

4.2 Movement Testing and Discussion with Position Storage

The stored movement sequence of the arm support is flexion, abduction, and flexion. Graphs of the movement results with position storage can be seen in Figure 5.

Figure 5. Motion chart with position storage

The positions stored for the movement follow a trajectory of flexion and abduction. Numbers 1, 2, and 3 in the graph show the movement of the arm support according to the stored movements: number 1 is a graph of flexion, number 2 shows horizontal abduction, and number 3 shows flexion. The next three movements return the arm support to the starting position: number 4 is a graph of flexion, number 5 shows horizontal abduction, and number 6 shows flexion. According to the resulting graph, movement with position storage requires 6 movements for one cycle. The graphs also show that the resulting movement is repeatable: movements 1-6 of the first cycle match movements 1-6 of the second cycle.

5 Conclusion

Based on the test results, the device can work as needed. Data storage can be carried out and implemented on the device.
Based on the movement storage data, the device can store movements as needed and can operate according to the movements that have been stored.

Acknowledgements

The author thanks Dr. Hermawan Nagar Rasyid, SpOT., M.T., Ph.D., FICS, who provided information about the need for CPM machine therapy devices.

References

[1] S. Miyaguchi, N. Matsunaga, K. Nojiri, and S. Kawaji, "Impedance control of CPM device with flex-/extension and pro-/supination of upper limbs," IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2007.
[2] S. Miyaguchi, N. Matsunaga, K. Nojiri, and S. Kawaji, "On effective movement in CPM for shoulder joint," IEEE International Conference on Systems, Man and Cybernetics, 2008.
[3] J. Hamill and K. M. Knutzen, Biomechanical Basis of Human Movement, 3rd edition, Lippincott Williams & Wilkins, USA, 2009.
[4] Mujianto, Cara Cepat Mengatasi 10 Besar Kasus Muskuloskeletal dalam Praktik Klinik Fisioterapi, CV. Trans Info Media, 2013.
[5] R. B. Salter, Continuous Passive Motion (CPM): Textbook of Disorders and Injuries of the Musculoskeletal System, Lippincott Williams & Wilkins, USA, 1999.
[6] S. W. O'Driscoll and N. J. Giori, "Continuous passive motion (CPM): theory and principles of clinical application," Journal of Rehabilitation Research and Development, 37 (2), 179-188, 2000.
[7] A. H. Noviyanto and M. Richard, "Development of therapy equipment for continuous passive motion machine shoulder joints: track motion control," ISIET Innovation and Technology in Education for 21st Century Supporting Thailand 4.0, 2017.
International Journal of Applied Sciences and Smart Technologies
Volume 1, Issue 1, pages 45–50
ISSN 2655-8564

Development Study of Deep Learning Facial Age Estimation

Puspaningtyas Sanjoyo Adi
Department of Informatics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
Corresponding author: puspa@usd.ac.id
(Received 28-05-2019; revised 29-05-2019; accepted 31-05-2019)

Abstract

Human age estimation is one of the most challenging problems in facial analysis, and it can be used in many age-related applications, such as age-restricted movies and age-specific computer applications or websites. This paper contributes brief information about the development of age estimation research using deep learning. We explore three recent papers that make significant contributions to age estimation using deep learning. These papers all selected classification methods, and they show gradual improvement both in results and in the selected loss functions. The best result gives an MAE (mean absolute error) of 2.8 years, and VGG-16 is the most frequently selected CNN architecture.

Keywords: age estimation, facial analysis

1 Introduction

Human age can be estimated from facial appearance. Our faces show a distinct pattern at every stage of life, so that a face differs greatly between, for example, childhood and adulthood. For the same person, photos taken in different years show the aging process on the face: the longer the interval, the more obvious the changes. Facial age estimation has potential applications such as age-restricted movies, vending machines for age-restricted products like tobacco and alcohol, and other age-specific computer applications or websites. Estimating age from images is one of the most challenging tasks in facial analysis.
It is hard to accurately predict human age because facial aging is a slow and complicated process affected by many factors. With rapid advances in computer vision and pattern recognition, this problem has become an interesting research topic. A typical pipeline of the existing age estimation methods consists of two modules: age image representation and age estimation techniques [1]. Recently, deep learning schemes, especially convolutional neural networks (CNNs), have been successfully employed for many tasks related to facial analysis. This paper aims to provide a brief description of several papers on age estimation research using CNNs or deep learning. We limit the discussion to a few papers published in journals or conferences in the last 5 years that became important milestones in age estimation work. This paper is organized as follows: in Section 2, the age estimation algorithms are explained, and in Section 3, we discuss the CNN architectures.

2 Age Estimation Algorithm

There has been a significant volume of research on age estimation. This paper focuses on the papers that contributed significant developments, which we explain together with the estimation algorithm used. For age estimation, three kinds of methods have been used: classification, regression, and ranking. In classification methods, human age is assumed to be classified into age groups. The weakness of classification methods is that important information is shared between adjacent age groups. This is addressed by regression methods, which appear to perform better. A different approach to this challenge is to adopt ranking methods.

Figure 1. Pipeline of the DEX method [2]

We choose Rothe's work [2], the winner of the LAP 2015 challenge [3] on apparent age estimation, as the first paper examined.
Age estimation in Rothe's work is done with a classification method. They use VGG-16 [4] as the base CNN architecture of DEX (Deep EXpectation). Figure 1 shows the pipeline of the DEX method. The system takes a face image, which is then classified by the CNN into 101 classes. These classes describe the possible age groups of the face image samples. They train the CNN for classification, and at test time they compute the expected value over the softmax-normalized output probabilities of the $|Y|$ neurons:

$$E(O) = \sum_{i=1}^{|Y|} y_i o_i, \quad (1)$$

where $O$ is the $|Y|$-dimensional output layer, $o_i \in O$ is the softmax-normalized output probability of neuron $i$, and $y_i$ is the age represented by class $i$. Their research results in an MAE (mean absolute error) of 3.09 years, using IMDB-WIKI [2] as the training dataset and FG-NET [5] as the testing dataset.

Similar research was also conducted by Antipov [6], who also uses VGG-16 as the base CNN architecture. They experimented with 3 kinds of age encoding (Figure 2): (1) pure per-year classification, called 0/1 classification age encoding (0/1 CAE); (2) pure regression, called real-value age encoding (RVAE); and (3) soft classification, called label distribution age encoding (LDAE). Each encoding has its own loss function, but LDAE gives the best result:

$$L_{LDAE} = -\frac{1}{N} \sum_{k=1}^{N} \sum_{i=1}^{100} \left( t_i^k \log p_i^k + (1 - t_i^k) \log(1 - p_i^k) \right), \quad (2)$$

where $N$ denotes the number of images in the batch, $i$ indexes the age classes, $t_i^k$ denotes the target, and $p_i^k$ denotes the prediction. The target label distribution follows a Gaussian distribution. The loss function is what differentiates Rothe [2] from Antipov [6], but both remain classification methods. Antipov's research results in an MAE of 2.84 years using FG-NET as the testing dataset.

Figure 2. Example of encoding [6]. $t$ denotes the encoding result and $\sigma$ is a hyperparameter of LDAE.

The two papers above show the evolution of classification methods, especially in loss function development.
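A minimal NumPy sketch of Eqs. (1) and (2) may make the two formulas concrete. The logits, class ages, and batch shapes below are made-up illustrative values, not taken from either paper.

```python
import numpy as np

def dex_expected_age(logits, ages):
    """Eq. (1): expected age over the softmax-normalized outputs."""
    o = np.exp(logits - logits.max())   # numerically stable softmax
    o /= o.sum()
    return float(np.dot(ages, o))

def ldae_loss(t, p, eps=1e-12):
    """Eq. (2): mean binary cross-entropy between target label
    distributions t and predictions p, both of shape (N, 100)."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).sum() / t.shape[0])

ages = np.arange(101)                      # classes 0..100 years
logits = np.zeros(101); logits[30] = 5.0   # an output peaked at age 30
print(round(dex_expected_age(logits, ages), 1))
```

Note that with uniform logits the expected age reduces to the mean of the class ages, which is why DEX's expectation softens the hard classification decision.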
Hu et al. [7] made an improvement by adding an age-difference estimator. This component is built to overcome the limited availability of ground-truth age-labeled datasets. Making a ground-truth age-labeled dataset is costly, so Hu et al. propose a new dataset consisting of pairs of face images of the same person taken at different times. This new dataset is used to build an age-difference loss function.

Figure 3. An overview of the method proposed by Hu [7]

Figure 3 shows an overview of the method proposed by Hu. The system gives 2 outputs: the age estimate and the age difference. After the training phase, the CNN architecture holds the weights that are then tested with the testing dataset. The initial probability distribution of the age classes is set to a Gaussian distribution. The age-difference information is used with three kinds of loss functions: entropy loss, cross-entropy loss, and Kullback-Leibler (K-L) divergence distance. These loss functions not only force the probability distribution of the age classes to have a single peak value but also make the probability distribution lie within the correct range. This research results in an MAE of 2.8 years using FG-NET as the testing dataset.

3 Discussions

From the three recent age estimation studies [2], [6], [7], we see that CNN architectures give good results in age estimation, and that estimation methods using a classification approach perform well. One challenge in CNN architectures is to find the best loss function; a Gaussian label distribution is the choice most used by researchers. Based on the three papers above, the CNN architecture giving the best predictions is VGG-16. This architecture was basically designed for face recognition, but based on these papers it also gives good results for age estimation. The other challenge is to find an effective and efficient training dataset.
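The Gaussian target distribution over age classes and the K-L divergence mentioned above can be sketched as follows. This is an illustrative sketch of the general technique, not Hu et al.'s actual implementation; the function names, sigma, and ages are assumptions.

```python
import numpy as np

def gaussian_label_distribution(age, n_classes=100, sigma=2.0):
    """Soft label over age classes, peaked at the true age."""
    ages = np.arange(n_classes)
    d = np.exp(-((ages - age) ** 2) / (2 * sigma ** 2))
    return d / d.sum()

def kl_divergence(t, p, eps=1e-12):
    """K-L divergence D(t || p) between target and predicted distributions."""
    t = np.clip(t, eps, 1.0)
    p = np.clip(p, eps, 1.0)
    return float(np.sum(t * np.log(t / p)))

t = gaussian_label_distribution(30)
p = gaussian_label_distribution(35)   # a prediction peaked 5 years off
print(kl_divergence(t, t) < kl_divergence(t, p))  # True: divergence grows as the peaks separate
```

Penalizing the divergence between a single-peaked Gaussian target and the predicted distribution is what pushes the prediction toward one peak in the correct range, as described in the text.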
Of the 3 papers, 2 papers [2], [7] contribute new datasets that are used by other similar papers for training. The IMDB-WIKI dataset [2] is used not only by [2] but also by [6] for the training phase. The big remaining challenge is to generate age labels automatically, since labeling facial images with ages by hand is difficult.

4 Conclusions

In this paper, we have reviewed a few milestone papers in age estimation using deep learning. All of the papers show significant improvement in age estimation. The components driving this development are the loss function and the dataset. Based on the recent research, the task is still open for improvement, especially using deep learning. In the future, this problem will remain challenging because aging is a complex process influenced by many internal and external factors such as genes, environment, etc.

Acknowledgements

Authors wishing to acknowledge assistance or encouragement from colleagues, special work by technical staff, or financial support from organizations should do so in an unnumbered Acknowledgments section immediately following the last numbered section of the paper.

References

[1] Y. Fu, G. Guo, and T. S. Huang, "Age synthesis and estimation via faces: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 32 (11), 1955–1976, 2010.
[2] R. Rothe, R. Timofte, and L. V. Gool, "Deep expectation of real and apparent age from a single image without facial landmarks," International Journal of Computer Vision, 126 (2-4), 144–157, 2018.
[3] S. Escalera, J. Gonzàlez, X. Baró, P. Pardo, J. Fabian, M. Oliu, H. J. Escalante, I. Huerta, and I. Guyon, "ChaLearn Looking at People 2015: apparent age and cultural event recognition datasets and results," Proceedings of the IEEE International Conference on Computer Vision, 243–251, 2015.
[4] K. Simonyan and A.
Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv, 1–10, 2014.
[5] G. Panis, A. Lanitis, N. Tsapatsoulis, and T. F. Cootes, "Overview of research on facial ageing using the FG-NET ageing database," IET Biometrics, 5 (2), 37–46, 2015.
[6] G. Antipov, M. Baccouche, S. A. Berrani, and J. L. Dugelay, "Effective training of convolutional neural networks for face-based gender and age prediction," Pattern Recognition, 72, 15–26, 2017.
[7] Z. Hu, Y. Wen, J. Wang, M. Wang, R. Hong, and S. Yan, "Facial age estimation with age difference," IEEE Transactions on Image Processing, 26 (7), 3087–3097, 2017.

International Journal of Applied Sciences and Smart Technologies
Volume 1, Issue 2, pages 113–120
p-ISSN 2655-8564, e-ISSN 2685-9432

Development of Stamping Machine Module to Improve Practical Competency

Pippie Arbiyanti
Department of Mechatronics, Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia
Corresponding author: pipie@pmsd.ac.id
(Received 20-05-2019; revised 17-10-2019; accepted 17-10-2019)

Abstract

The aim of this research is to design and manufacture a stamping machine module controlled by a PLC. A stamping machine is an application of an electro-pneumatic system in industry for printing images or text on workpieces. The result of this research is a stamping module controlled by a PLC that is reliable and easy to disassemble for learning electro-pneumatic practice.

Keywords: stamping, electro-pneumatic system, automation, PLC

1 Introduction

Entering the era of Industrial Revolution 4.0, more industries are implementing automation in their production processes. Thus, the need for workers whose expertise keeps up with the latest technological developments is increasing. Vocational education in engineering, as a provider of skilled workers, needs to equip students with skills that are in line with the latest technological developments.
One of the important components in industrial automation is the electro-pneumatic system. Electro-pneumatic systems are widely used in production processes because they use compressed air, a cheap, clean, and effective power source. Knowledge and skills in electro-pneumatics are mandatory for a technician in industry. The competencies needed for electro-pneumatic systems are an understanding of electrical and pneumatic components, their functions, working principles, and control, as well as skills in assembling, troubleshooting, and commissioning an electro-pneumatic system.

In education, electro-pneumatic material is delivered using practical teaching aids. The disadvantage of existing props is that the types of components provided are limited, so that only demonstration or simulation of the motion of an electro-pneumatic system is possible. As a result, students do not yet get a realistic picture of an electro-pneumatic system as used in industry. Examples of production processes close to industrial reality are therefore needed, one of which is the stamping process [1]. Stamping is the process of printing images or text on workpieces automatically. Some manufacturers such as Festo and SMC offer practice modules that exemplify industrial automation processes, but at very high prices [2, 3]. Singh [4] has also developed an automatic pneumatic press machine for large-scale use. This paper offers the design and manufacture of a stamping machine module with PLC control. The module will be used as an electro-pneumatic practice prop. It can be disassembled with the appropriate equipment, to develop students' competencies in assembling, troubleshooting, and programming an electro-pneumatic system.
In addition, the modules described in this paper are much cheaper to make than existing modules, because they are self-developed and use second-hand components that still function properly. The paper is organized as follows: Section 2 presents the module design, and the method (steps) of making the module is described in Section 3. The results of the research and discussion are given in Section 4. The paper closes with some conclusions.

2 Design

In this section, the writer explains the design of the proposed tool, covering the stamping process and the mechanical design.

2.1 Stamping Process Design

The stamping machine module is used to stamp double-sided workpieces using the tampon stamping method. The machine performs the sequence of operations illustrated in Figure 1.

Figure 1. Stamping process: sliding, stamping the 1st side, sliding, clamping, turning, sliding, stamping the 2nd side, unclamping

The stamping process starts by putting the workpiece on the conveyor belt, which carries it to the stamping station, where a tampon plunger stamps the workpiece. The workpiece, now stamped on one side, moves between the gripper jaws, which then close. The workpiece is lifted, turned over, and then lowered back onto the conveyor. The stamping process is repeated at the second station to stamp the other side. After both sides are printed, the workpiece runs on towards the storage box.

2.2 Mechanical Design

The stamping module is designed to be easily disassembled, to meet the desired practice competencies. The mechanical module design is presented in Figure 2.

Figure 2.
Mechanical design of the stamping machine

The base holds the control components, namely the power supply, solenoid valves, and PLC. The upper part holds the stamping machine itself, which consists of a conveyor with a timing belt and a pneumatic system implementing the stamping process. The module can be dismantled and assembled with the appropriate tools.

3 Research Methodology

This section is devoted to the research method. The stamping machine module is realized as a pneumatic system with a PLC-based controller.

3.1 Pneumatic System

The actuators used to carry out the stamping process are:
a) Sliding is done by a DC motor, which drives the belt conveyor.
b) Two tampon plungers are actuated by a rodless cylinder.
c) A parallel gripper is used to clamp the workpiece.
d) A semi-rotary drive is used to hold the gripper.
e) Two lifting cylinders are used to move the semi-rotary unit and the tampon plunger up and down.

The pneumatic stamping machine circuit is shown in Figure 3. The drawing and simulation of the pneumatic circuit are carried out using the FluidSIM-P software [5].

Figure 3. Pneumatic circuit

3.2 PLC-Based Controller

The stamping machine is controlled by an Omron PLC. The various inputs and outputs are shown in Table 1.

Table 1. PLC inputs/outputs

Digital inputs          Digital outputs
Start                   Motor M1
Stop                    Solenoid 1Y1
Reset                   Solenoid 1Y2
Manual/auto M/A         Solenoid 2Y1
Emergency stop          Solenoid 3Y1
Limit switch 1LS1       Solenoid 4Y1
Limit switch 1LS2       Solenoid 5Y1
Limit switch 4LS1       M/A indicator
Limit switch 4LS2

Limit switches 1LS1 and 1LS2 were placed to sense the position of the rodless cylinder, which holds the tampon plunger. Limit switches 4LS1 and 4LS2 were placed to sense the position of the semi-rotary drive (swivel).
Control action was taken according to the various inputs from the switches. The system can run in manual or automatic mode, selected with the M/A switch. The sequential diagram of the PLC program is shown in Figure 4. The ladder logic program for the PLC was written in CX-Programmer [6].

Figure 4. Sequential diagram

4 Results and Discussions

The result of this research is a stamping machine module, as presented in Figure 5.

Figure 5. Stamping machine module

The first stage of testing the stamping machine module is done by running the system through the planned process sequence. The test results show that the module works well according to the plan. In the second stage of testing, shown in Figure 6, students use the module in lectures on electro-pneumatic practice at the Mechatronics Department, Politeknik Mekatronika Sanata Dharma (PMSD). In this practice, students assemble, troubleshoot, and program the stamping machine module based on the procedures in the manual book. The test results show that students can perform the procedures based on the manual book and run the stamping machine process properly.

Figure 6. Electro-pneumatic workshop using the stamping machine module

5 Conclusions

A PLC-based controller for the stamping machine module has been successfully designed and developed.
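The stamping sequence driven by the PLC can be sketched as a simple step chain. The real controller is an Omron PLC programmed in ladder logic with CX-Programmer; the Python model below is only an illustration, with output names taken from Table 1 and an ordering that is a simplified reading of the process description, not the actual program.

```python
# Illustrative sketch only, not the actual ladder program. Output names
# come from Table 1; the step ordering is a simplified reading of the
# stamping process description.
STEPS = [
    ("M1+",  "conveyor slides workpiece to stamping station"),
    ("1Y1+", "rodless cylinder positions tampon plunger"),
    ("2Y1+", "lifting cylinder lowers plunger: stamp 1st side"),
    ("2Y1-", "lifting cylinder raises plunger"),
    ("5Y1+", "gripper closes on workpiece"),
    ("3Y1+", "lifting cylinder lifts workpiece"),
    ("4Y1+", "semi-rotary drive turns workpiece over"),
    ("3Y1-", "workpiece lowered back onto conveyor"),
    ("5Y1-", "gripper opens"),
    ("M1+",  "conveyor slides workpiece to 2nd station and storage box"),
]

def run_sequence(auto_mode: bool):
    """Walk the step chain; in manual mode the sequence advances only
    one step at a time (here modeled by stopping after the first step)."""
    log = []
    for output, action in STEPS:
        log.append(f"{output}: {action}")
        if not auto_mode:
            break
    return log

print(len(run_sequence(auto_mode=True)))  # 10
```

The manual/automatic distinction mirrors the M/A switch described above: automatic mode chains all steps, while manual mode requires an operator action between steps.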
The module has been evaluated by simulating the stamping operation in FluidSIM-P and by real runs in the electro-pneumatic workshop. The module is equipped with a manual book that can be used for learning assembling, troubleshooting, and programming of a PLC-based electro-pneumatic system. Future studies will focus on developing modules for other automation processes.

Acknowledgements

This research was financially supported by the Polytechnic Education Development Project, Ditjen-Dikti. The author thanks the colleagues of Politeknik Mekatronika Sanata Dharma for discussions.

References

[1] S. Hesse, 99 Examples of Pneumatic Applications, Esslingen: Festo AG & Co, 2001.
[2] Festo, Modular System for Mechatronics Training, Festo Corporation, 2007.
[3] SMC, "FMS-200: flexible integrated assembly systems," SMC International Training, 2019.
[4] R. Singh and H. K. Verma, "Development of PLC-based controller for pneumatic pressing machine in engine-bearing manufacturing plant," Procedia Computer Science, 125 (2), 449–458, 2018.
[5] Festo, "FluidSIM Pneumatics user guide," March 2006.
[6] Omron, "CX-Programmer Ver. 9.0," December 2009.

International Journal of Applied Sciences and Smart Technologies
Volume 3, Issue 2, pages 145–152
p-ISSN 2655-8564, e-ISSN 2685-9432

Personal Assistant Robot

Ziany Alpholicy X.1,*, Sagar S. Bhandari1, Praveen P. Dsouza1, Divanshu C. Raina1

1 Xavier Institute of Engineering, opposite S.L. Raheja Hospital, Mahim (West), Mumbai 400037, Maharashtra, India
* Corresponding author: ziaxavier@gmail.com
(Received 02-08-2020; revised 29-12-2021; accepted 29-12-2021)

Abstract

Since the boom in science and technology, humans have been trying to invent machines that could reduce their effort in day-to-day activities.
in this paper, we develop a personal assistant robot that can pick up objects and return them to the user. the robot is controlled using an android application on a mobile phone. the robot can listen to the user's commands and then respond in the best way possible. the user can command the robot to move to a given location, capture images, and pick up objects. the robot is equipped with an ultrasonic sensor and a web camera that help it move to different locations effectively. it is also equipped with a sled that plays an important role in the object-picking process. the robot uses a tiny yolov3 model which is rigorously trained on several images of the objects. there are some possible improvements that could help this robot to be used in several other fields as well.

keywords: actuators and sensors, tcp socket, object detection

1 introduction

in recent years, humans have been creating various machines to help physically challenged people. as people age, they tend to face challenges in their everyday life and hence require assistance from others to carry out their routine work. they are challenged to move physically, and if they need to pick up objects they may require someone's assistance. also, there are workers in factories that might face difficulties while working, as they might be surrounded by hazardous chemicals or dangerous machines. people working in hospitals might also be exposed to several diseases while handling the relocation of different medical equipment. in this paper, we develop a personal assistant robot that a user can operate without physically moving from their location. there exist many such robots that can carry out routine jobs automatically, although many of them are still incapable of eradicating the above-mentioned problems.
2 research methodology

a technical research paper published in 2015 by students of the indian institute of information technology, chittoor, described the development of an assistant robot that can be operated using speech commands and that can be used in hospitals, homes, industries, and educational institutes. they developed a robot that can be controlled using the human voice. the robot could move to different locations and relocate an object from one place to another. they implemented a robotic arm by calculating several parameters, such as the variation of angle and distance between the robotic hands over time and the angular velocity of the robotic arm [1].

in 2005, another research paper was published by students of national chiao tung university, which described the development of a personal assistant robot able to assist the user physically with real movement and actions. the robot they developed had its own intelligent sensors and actuators. they also implemented a face tracking function, which was achieved by a radial basis function type neural network (rbfnn). they implemented several features like home care, remote monitoring, security, etc. [3].

recently, in 2018, another research paper was published in which an interactive personal assistant robot was developed using the raspberry pi computing engine. the robot they developed was self-balancing, implemented with the help of the principle of dynamic balancing. they used google text to speech (gtts) to convert the voice commands into text that can be recognized by the computing engine. they used bluetooth services to maintain the connection between the robot and the mobile application. they also implemented a multiple face recognition system so that the robot could replace a security guard [2].
more information can be found in the literature [4], [5], [6].

3 results and discussion

the object detection model used here is yolo v3. the raspberry pi, due to its limited computing power, cannot be used to implement the original yolo model. hence, we use yolo tiny, a small and yet efficient version of yolo. for the dataset we used google's open images dataset v6. to train the model, we have to set the number of batches for the dataset so as to determine the number of training iterations. the ideal number of iterations is 2000 batches per object class; in our case we are dealing with 7 objects, so the number of batches should be 14,000. while the model is training, the prime objective of the algorithm is to decrease the average loss. we started with an average loss of 4.5 and reached an average loss of 1.08. figure 1 shows the graph depicting the decrease in average loss as the number of iterations increases.

the robot and the android application are connected through a tcp socket connection. the raspberry pi is configured to work as a standalone network and starts its own hotspot whenever it boots up. the android application is required to connect to the pi hotspot. a server socket is created by the raspberry pi upon boot-up, which waits for the client to connect to the server. as soon as the android application is connected to the pi hotspot through the wi-fi network, the user can create a client socket and connect to the server listening on the pi. the client-server socket connection can then be used to transfer data between the robot and the android application.

in order to execute the movement of the robot from its starting place to the desired location, the paths need to be set within the robot. the robot uses these pre-defined paths to reach a particular location. there can be several paths for different locations within an area.
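the pre-defined path mechanism can be sketched in python. this is a minimal sketch: the command names, the file layout, and the per-command inversion used for the return leg are illustrative assumptions (the paper states only that the outgoing commands are kept in a python list, saved to a text file, and that the return path is computed by reversing the recorded commands).

```python
import os

# inversion map for the return path; the paper says the return path is the
# reversed command list -- inverting each command as well is an assumption
# about what "reversing" means for drive commands
INVERSE = {"forward": "backward", "backward": "forward",
           "left": "right", "right": "left"}

def save_path(name, commands, path_dir="paths"):
    """save the recorded commands and the computed return path as two text files."""
    os.makedirs(path_dir, exist_ok=True)
    with open(os.path.join(path_dir, name + "_go.txt"), "w") as f:
        f.write("\n".join(commands))            # outgoing leg, as recorded
    back = [INVERSE[c] for c in reversed(commands)]
    with open(os.path.join(path_dir, name + "_back.txt"), "w") as f:
        f.write("\n".join(back))                # return leg, reversed and inverted
    return back

def load_path(name, leg, path_dir="paths"):
    """read one leg ('go' or 'back') of a stored path back into a command list."""
    with open(os.path.join(path_dir, "%s_%s.txt" % (name, leg))) as f:
        return f.read().splitlines()

recorded = ["forward", "forward", "left", "forward"]   # commands given by the user
back = save_path("kitchen", recorded)
print(back)  # -> ['backward', 'right', 'backward', 'backward']
```

saving both legs as plain text files mirrors the paper's "two files per complete path" design, so the robot only has to replay one file line by line in either direction.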
to set a path, the user is supposed to move the robot from its home location to that particular spot. while the robot is moving, it records the commands given to it by the user and saves them in a dynamic python list. when the user completes the moving job, the robot saves the path from the list in the form of a text file in its path directory. the robot also computes the returning path simply by reversing the commands given to it by the user. so, two files are saved as one complete path, and these files are used to execute the movement of the robot from the home location along the path and vice versa.

when the user commands the robot to pick up an object, the robot uses the path text files saved within the path directory to reach the desired location. as soon as the robot reaches the location, it uses its webcam to capture images and then uses those images to search for the object which the user has decided to pick. it uses the yolo algorithm to detect the object in the captured image. as soon as the object is detected, the robot uses the ultrasonic sensor to measure the distance between itself and the object. according to the sensor readings, it then relocates itself close enough to the object to pick it up. after picking up the object, it returns to its home location using the path files. figure 2 and figure 3 show the object detection results on two different objects. figure 4 shows the robot with the sled opened, while figure 5 shows the robot with the sled closed.

figure 1. graph depicting the decrease in average loss as the number of iterations increases

figure 2. the object detection result on two different objects

figure 3. the object detection result on two different objects

figure 4.
the robot with sled opened

figure 5. the robot with sled closed

4 conclusion

the robot developed can be used in different scenarios, ranging from normal home use to hospitals or chemical industries. it is equipped with intelligent features that can be used for efficiently picking up several objects. the path storing and object detection via the ultrasonic sensor help the robot move precisely from one place to another. the robot can be equipped with a voice recognition feature, enabling physically challenged people to use it effectively and directly. the robot can also be equipped with speakers which can speak out information regarding the position of the robot relative to the path saved in its memory; it can also speak out the list of objects available and the paths set within its memory. the robot can also be equipped with a google coral accelerator in order to increase its speed and allow the user to get a live video stream from the robot's webcam; this would also increase its speed in detecting objects. the number of ultrasonic sensors can also be doubled so as to prevent the repeated checking for the object by turning left; an alternative solution is to mount one ultrasonic sensor on a stepper motor so that it can cover both directions. the robot's sled can also be converted into robotic hands, or a design inspired by a claw, to increase its capability of picking up objects.

references
[1] a. mishra, p. makula, a. kumar, k. karan and v. k. mittal, “a voice-controlled personal assistant robot.” international conference on industrial instrumentation and control (icic), 2015.
[2] i.h. shanavas, p.b. reddy and m.c.
doddegowda, “a personal assistant robot using raspberry pi.” international conference on design innovations for 3cs compute communicate control, 2018.
[3] c.h. lin, h. andrian, y.q. wang, and k.t. song, “personal assistant robot.” proceedings of the 2005 ieee international conference on mechatronics, 2005.
[4] real-time object detection with deep learning and opencv, https://www.pyimagesearch.com/2017/09/18/real-time-object-detection-with-deep-learning-and-opencv/
[5] train your own tiny yolo v3 on google colaboratory with the custom dataset, https://medium.com/@today.rafi/train-your-own-tiny-yolo-v3-on-google-colaboratory-with-the-custom-dataset-2e35db02bf8f
[6] the ai guy. (2020, january 28). yolov3 in the cloud: install and train custom object detector (free gpu) [video file]. retrieved from https://www.youtube.com/watch?v=10jorjt39ns&t=1440s

international journal of applied sciences and smart technologies, volume 3, issue 1, pages 27–34, p-issn 2655-8564, e-issn 2685-9432

human detection in video surveillance
sushama khanvilkar1, santosh gupta1, hinal rane1, calvin galbaw1,*
1 department of computer engineering, xavier institute of engineering, mahim, mumbai, maharashtra, india
* corresponding author: calving2012@gmail.com
(received 17-07-2020; revised 31-07-2020; accepted 01-08-2020)

abstract

recognition of human activities in videos has gathered numerous demands in various applications of computer vision like ambient assisted living, intelligent surveillance, and human-computer interaction. one of the most pioneering techniques for human detection in video surveillance is based on deep learning, and this project mainly focuses on various approaches based on that.
this paper provides an idea of a solution to use video surveillance more effectively, by detecting any humans present and notifying the concerned people. a convolutional neural network, a deep learning model preferred for fast computation, is used by stacking 3 blocks of layers on fully connected layers. this provided identification of humans and a naïve approach to eliminate inanimate human-like objects such as mannequins.

keywords: deep learning, cnn, human detection

1 introduction

human activity detection is a major problem in smart video surveillance. it is an elementary drawback in computer vision, i.e. to notice the activity of humans in surveillance videos. these applications need real-time detection performance, but it is generally very time-consuming to detect the actual activity. since the use of cctv, the cases of forced entries and robberies have decreased drastically. but the delay in response to such cases can cause problems. if the owner can get a notification of such events, the culprit can be caught red-handed. it becomes important to alert the user by detecting what activity is being performed by the subjects [1-3].

2 research methodology

this prospective implementation was carried out using simple programming tools and cloud resources. the convolutional neural network (cnn) is the most promising network for working with images and videos. hence, developing an architecture using a cnn was an optimal and efficient choice.

implementation design. in order to implement the system, a modified alexnet design which is trained on frames of video has been used.

dataset size. 8 videos have been used as a dataset.

sample size calculation. the sample size was chosen from multiple videos which satisfy the needs of the required datasets.
each video chosen has an average of 8000 frames, from which about 10% are taken into consideration. this is to reduce redundancy of the data.

subjects and selection method. the dataset is formed of videos which are taken using cctv cameras. these videos all include people trying to break into shops and houses. some videos also include mannequins and are taken mostly at night. the dataset is labeled according to the visibility of humans.

figure 1. block diagram

preprocessing methodology. the main source is a raw video recorded by the cctv, as in figure 1. such videos have a very high fps, and using them requires a lot of processing power. to reduce the processing load, we reduce the fps, i.e. divide the current fps by 10. these frames are then sent for further processing [4]. the frames extracted from the videos are rgb images. processing of the images begins with resizing the image to 227 × 227 and then converting the rgb images into grayscale images. this method converts or compresses the three channels of rgb to a single channel. this single channel contains the values of luminance. luminance can also be described as brightness or intensity, which can be measured on a scale from black (zero intensity) to white (full intensity). therefore, the output will have the monochromatic range of black and white. most theft and break-ins occur at night; hence the images will be dark and unclear. to brighten up the images, techniques like histogram equalization and alpha-beta transformation can be used. we chose histogram equalization to brighten up the images. histogram equalization improves the contrast of the image by spreading out the most frequent intensity values. to remove noise from the images, blurs are used. this reduces the sharpness of the image and smoothens it.
gaussian blur is the most popular blur and is the one used for processing. blur also helps in the detection of edges and in thresholding. thresholding converts the image to only two intensities or values. otsu thresholding is used in this project. finding edges using canny edge detection is the last pre-processing step. edges help reduce the processing done by the neural model: they reveal the important parts of the image, discarding others, and help in the extraction of features by the cnn.

neural network model. the human detection module makes use of a convolutional neural network to detect and recognize humans in the video surveillance. cnns are regularized versions of multilayer perceptrons. multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. a cnn uses relatively little preprocessing compared to other image classification algorithms; this means that the network learns the filters that in traditional algorithms were hand-engineered. the structure of the network consists of different components: input layer, hidden layers, and output layer. the input layer reflects the potential descriptive factors that may help in prediction. the hidden layers are a defined number of layers with a specified number of neurons in each layer. the output layer reflects whether a human is present or not. the cnn architecture used is a modified alexnet. the input is a series of 3 continuous frames, to help determine whether the entity is a human or a human-like mannequin. due to this, each frame in the input stack has its own cnn layers. the features extracted (the outputs of the cnn layers) are concatenated and given to the fully connected network. the classification of the images is done by using the softmax activation layer.
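two of the preprocessing steps described above, grayscale conversion and histogram equalization, can be sketched with numpy alone. the bt.601 luminance weights and the tiny synthetic frame are assumptions for illustration; the paper does not state which conversion formula or library it uses.

```python
import numpy as np

def to_grayscale(rgb):
    """compress the three rgb channels into one luminance channel
    (itu-r bt.601 weights; an assumption, the paper gives no formula)."""
    return np.round(0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
                    + 0.114 * rgb[..., 2]).astype(np.uint8)

def equalize_histogram(gray):
    """spread out the most frequent intensity values to improve contrast."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first nonzero cumulative count
    # classic mapping: rescale the cdf so intensities span the full 0..255 range
    lut = np.clip(np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

# tiny synthetic "dark frame": intensities crowded in a narrow low range
frame = np.full((4, 4, 3), 40, dtype=np.uint8)
frame[2:, 2:, :] = 60
gray = to_grayscale(frame)
eq = equalize_histogram(gray)
print(gray.min(), gray.max())  # -> 40 60  (narrow range before equalization)
print(eq.min(), eq.max())      # -> 0 255  (stretched after equalization)
```

the synthetic dark frame shows the effect the text describes: the night-time intensities crowded around 40 and 60 get stretched across the full black-to-white range.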
figure 2 depicts the cnn block for each frame [5], [6]. the fully connected network is illustrated in figure 3.

figure 2. cnn block

figure 3. fully connected network

3 results and discussion

the model classifies the data properly at an accuracy rate of 87%. this accuracy is measured by feeding the model test data containing both positive and negative labelled images. from the predicted labels, the number of correctly labelled data, both positive and negative, divided by the total number of data gives the accuracy of the model. the model is also trained in such a way that it does not detect mannequins. the implementation uses an android gui to alert the user of the cctv system. this will help limit the damage done due to a robbery, or catch the intruder. the model is fast and efficient, but the delay due to the cloud and pre-processing hampers the performance a little. this can be mitigated by using a faster network and faster hardware.

example. figure 4 shows a correct prediction on the gui of the system. this depicts the notification and alert used in the system.

figure 4. prediction on android gui

figure 5 shows an incorrect prediction, which is also called a false positive. this depicts that the model used is not perfect, having an accuracy of 87%.

figure 5. false positive prediction

discussion. human detection in video surveillance using deep learning techniques is a growing area in the field of computer vision. in general, human detection is the process of automatically finding the action in a sequence of videos.
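the accuracy measure used in the results above reduces to correctly labelled samples over total samples. a minimal sketch with illustrative labels (not the paper's actual test data):

```python
def accuracy(predicted, actual):
    """fraction of correctly labelled samples, counting both positive
    and negative labels, as described in the text."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# illustrative labels only: 1 = human present, 0 = no human
pred = [1, 1, 0, 0, 1, 0, 1, 1]
true = [1, 1, 0, 1, 1, 0, 0, 1]
print(accuracy(pred, true))  # -> 0.75
```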
in this project, we make use of a convolutional neural network. a convolutional neural network (cnn) is an artificial neural network architecture targeted at pattern recognition. cnn-based methods require labels, which are difficult to obtain due to the high-dimensional information in videos. human activity is detected using this algorithm, and after that the user can be alerted about the subjects through an android application. environment sensing is the process of detecting a change in the position of an object relative to its surroundings, or a change in the surroundings relative to an object. the performance of the system can be enhanced by detecting the changes in its surroundings so that it can adapt to the change at the same time. for example, if there is any movement in the shop after closing, then the system will alert the user about the suspicious activity by sending an alert message so that the user can take action on it. our main goal was to detect humans at low visibility due to night time. the deconstruction of the implementation is as follows:
1. initially, the input video is taken from video surveillance.
2. this video is processed frame by frame by the video processing stage, which is used to detect the human activity in the video.
3. the output video is provided to the network to identify humans using the cnn model.
4. the output of the model is sent to the user via the application to alert them about human activity so they can take action.

4 conclusion

the accuracy of actually catching a robbery is not calculated in this study, but delays will reduce the success rate. the purpose of the project is to find techniques for behavioural identification, using various techniques for motion recognition based on deep learning. today, human activity detection in video is a popular research area. for security purposes, behaviour recognition can be used in a shop or mall.
for example, with the use of cctv, the cases of forced entries and robberies have decreased drastically. but the delay in response to such cases can cause problems. if the owner can get a notification of such events, the culprit can be caught red-handed. it becomes important to alert the user by detecting what activity is being performed by the subjects.

acknowledgements

we (the authors) thank mrs. sushama khanvilkar, who gave valuable suggestions and ideas when we were in need of them. she encouraged us to work on this project. we are also grateful to our college for giving us the opportunity to work with them and providing us the necessary resources for the project. working on this project also helped us to do a lot of research, and we came to know about many new things.

references
[1] r. khurana and a. kushwaha, “deep learning approaches for human activity recognition in video surveillance: a survey.” first international conference on secure cyber computing and communication (icsccc), 542–544, 2018.
[2] a. khaleghi and m. moin, “improved anomaly detection in surveillance videos based on a deep learning method.” 8th conference of ai & robotics and 10th robocup iranopen international symposium (iranopen), 73–81, 2018.
[3] l. anishchenko, “machine learning in video surveillance for fall detection.” ural symposium on biomedical engineering, radio electronics and information technology (usbereit), 99–102, 2018.
[4] https://www.pyimagesearch.com/2019/07/15/video-classification-with-keras-and-deep-learning/
[5] https://towardsdatascience.com/introduction-to-video-classification-6c6acbc57356
[6] https://www.ncbi.nlm.nih.gov/pmc/articles/pmc5469670/

international journal of applied sciences and smart technologies, volume 4, issue 2, pages 233–240, p-issn 2655-8564, e-issn 2685-9432
this work is licensed under a creative commons attribution 4.0 international license

design of someone's character identification based on handwriting patterns using support vector machine
r. a. kumalasanti1,*
1 department of informatics, faculty of science and technology, sanata dharma university, yogyakarta, indonesia
* corresponding author: rosalia.santi@usd.ac.id
(received 01-11-2022; revised 21-11-2022; accepted 28-11-2022)

abstract

image processing has a fairly broad scope and is rich in innovation. today, image processing has developed with various reliable methods in almost all aspects of life. one use of technology in the field of image processing is biometric identification. biometrics is a system that utilizes specific data in the form of individual physical characteristics in the process of identifying and validating data. the biometric attribute that will be developed in this study is handwriting. the handwriting pattern of each individual has a different character and uniqueness, so it can be used as an identity. the uniqueness of this handwriting will be studied with the aim of recognizing a person's character or personality. if someone's personality data has been obtained, this can help the process of recruiting prospective employees in a company by simply reading their handwriting patterns.
handwriting can be studied by combining it with the science of psychology so that it can provide output in the form of a person's characteristics or personality. this research will be developed using multi-class support vector machine (svm) classification. the preprocessing stages, in the form of binarization, thinning, and data extraction, will also greatly affect the reliability of the system. simulations with variations of variables and parameters are expected to obtain optimal accuracy.

keywords: handwriting, biometric, svm

1 introduction

digital image processing is digital manipulation and interpretation of images using a computer. image processing has a fairly broad scope and is still being developed in various aspects of life. this shows that image processing has a good impact on technological developments in this era. digitalization is something that is already familiar because almost all daily activities utilize digital data. information in the form of digital data is flexible, so it can be used for certain purposes. when covid-19 hit, activities and social interactions were increasingly restricted in order to break the chain of the virus. this had an impact on social activities, which became increasingly limited. an example is the recruitment of prospective employees, which is quite complex, starting from a psychological test through to interviews. the psychological test aims to find out the character of the prospective employee, and an interview test is conducted to assess the candidate's gestures and the way they answer each question. some of these tests are sometimes supplemented with other tests, which is quite time-consuming.
this becomes less effective when it has to be done at present, because social interactions are being restricted. here image processing can be a solution, namely by using digital data to identify a person's character through handwriting. handwriting is one of the biometric attributes that can be used for identification, besides fingerprints, facial recognition, voice, and many more. a person's handwriting has its own uniqueness, which can be seen from the firmness of the strokes, the distance between one word and another, and the distance between the lines and the boundaries of the paper used [1]. these characteristics can be used to see explicitly the characteristics of a person as an individual. if a person's character has been obtained, the recruitment activities for prospective employees can run effectively and efficiently without having to meet face to face.

in this research, we will try to use svm (support vector machine) with a data set of handwriting. handwriting will be collected from manual offline writing (on paper) and then scanned, for pre-processing through to the testing stage. the handwriting will be trained and tested to obtain the characteristics of each individual.

2 research methodology

handwriting is one of the biometric attributes that can be used as authentic data because each individual's handwriting is unique: the emphasis when writing, the firmness of the strokes, the distance between one word and another, the distance between the lines and the margins of the writing, and much more. in this study, handwriting is used to read a person's character. computer science and psychology will be combined to find optimal solutions.

2.1 biometrics. biometrics is a science and innovation to describe information from the human body naturally [2]. biometrics refers to individual differences based on physiological or social attributes.
some of the biometrics on the human body are the iris, hand geometry, signatures, fingerprints, face, handwriting, and many more. biometric values themselves can be in the form of individual physiological or social attributes that meet completeness needs.

2.2 preprocessing. preprocessing is a stage that aims to improve the quality of the images obtained so that they are easier to process at a later stage [3]. the data set in the form of handwriting will be scanned to obtain a digital image. the stages of this preprocessing are cropping, negative imaging, image binarization, and thinning. cropping is used to maximize the information retrieved so that there is not too much unneeded information. binarization is used to convert data from rgb to black and white. this aims to ease data computation and can provide optimal data sets for the next stage. thinning is also done to thin out the writing so that the writing has the same thickness even though the thickness of the pen varies.

2.3 svm (support vector machine). this research will utilize the support vector machine (svm) method in conducting data training and testing. svm is a technique for finding hyperplanes that can separate two data sets from two different classes [4]. an advantage of svm is being able to determine distances using support vectors, so that the computation process becomes faster and more effective. svm is one of the methods in supervised learning which is usually used for classification and regression [5]. in classification modeling, svm has a more mature and mathematically clearer concept compared to other classification techniques [6]. svm can also solve linear and non-linear classification and regression problems.

figure 1.
illustration of class separation in svm [4]

the best hyperplane, as seen in figure 1, can be found by maximizing the dual objective

L_D = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j)

subject to

\sum_{i=1}^{n} \alpha_i y_i = 0, \quad \alpha_i \ge 0.

not all data can be separated linearly, while svm is basically only able to separate data linearly, so a development is needed to make svm able to separate non-linear data, one way being to add a kernel function. by adding the kernel function to svm, the data x will be mapped to a higher-dimensional vector space so that a hyperplane can be constructed. the hyperplane illustration can be seen in figure 2.

figure 2. hyperplane illustration in a higher dimension [4]

3 results and discussions

identification of handwriting characteristics is still being developed, and because the dataset is in the form of biometric data it is an interesting thing to develop. in today's digital era, it is possible to model all forms of activity as concisely and effectively as possible. moving on from instantaneous individual needs, the time-consuming process of recruiting prospective employees can be completed in a short and effective way using image processing. the raw data in the form of handwriting, after being scanned, is immediately preprocessed before going to the next stage. raw data and preprocessing results can be seen in table 1.

table 1. sample images from preprocessing results
id | raw data | preprocessing result
1 | [image] | [image]
2 | [image] | [image]
3 | [image] | [image]

for the appropriate input data, the distance will be calculated, namely the distance between words in the handwriting. furthermore, once the distance results have been obtained, the kernel calculation will be carried out with the svm kernel function, namely the linear kernel.
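the dual objective above can be evaluated directly with numpy for a linear kernel. this is a minimal sketch: the toy points and alpha values are illustrative assumptions that merely satisfy the constraints, not a trained model.

```python
import numpy as np

def dual_objective(alpha, X, y):
    """svm dual L_D = sum_i alpha_i - 1/2 sum_ij alpha_i alpha_j y_i y_j <x_i, x_j>,
    with a linear kernel, matching the formulation in the text."""
    K = X @ X.T              # gram matrix of inner products <x_i, x_j>
    ay = alpha * y           # elementwise alpha_i * y_i
    return alpha.sum() - 0.5 * (ay @ K @ ay)

# toy linearly separable 2-class data
X = np.array([[0.0, 1.0], [0.0, 2.0], [3.0, 0.0], [4.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.array([0.2, 0.0, 0.2, 0.0])   # satisfies sum(alpha_i*y_i)=0, alpha_i>=0
assert np.isclose((alpha * y).sum(), 0.0)
print(dual_objective(alpha, X, y))       # approximately 0.2
```

in a real svm the alphas would be the output of the optimizer; the point of the sketch is only to show how the linear-kernel gram matrix enters the objective.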
The results and predictions obtained from the SVM kernel calculation are then subjected to cross-validation in order to obtain an accurate measure of the classification performance. The SVM flowchart can be seen in Figure 3.

Figure 3. SVM flowchart (start → input data (handwriting) → SVM → output data → finish).

The calculation of handwriting patterns using word-spacing patterns is applied to both the training images and the test images. This research combines computer science and psychology in its application. Based on the results obtained from the SVM calculation, the sample is classified according to a person's character or personality. Many attributes are needed to compare one character with another so that better results can be obtained.

4 Conclusions

The results and predictions of the handwritten character identification design using SVM, with accuracy measured by cross-validation, are expected to provide optimal accuracy. The resulting classification of a person's character can help decision making in recruiting prospective employees. It is hoped that the results will be useful not only in the pandemic era but can also be developed further in the future.

References

[1] G. Thilagavathi, G. Lavanya and N. K. Karthikeyan, "Tamil handwritten character recognition using artificial neural network," International Journal of Scientific & Technology Research, 8 (12), 1611–1616, 2019.
[2] O. C. Ahuja, M. A. Mabayoje and R. Ajibade, "Offline signature recognition & verification using neural network," International Journal of Computer Applications, 35 (2), 44–51, 2011.
[3] S. Saidah, M. B. Adinegara, R. Magdalena and N. K.
Pratiwi, "Identifikasi kualitas beras menggunakan metode k-nearest neighbor dan support vector machine," Jurnal Telekomunikasi, Elektronika, Komputasi, dan Kontrol, 5 (2), 114–121, 2019.
[4] M. Athoillah, "Pengenalan wajah menggunakan SVM multi kernel dengan pembelajaran yang bertambah," JOIN (Jurnal Online Informatika), 2 (2), 84–91, 2017.
[5] S. Khedikar and U. Yadav, "Identification of disease by using SVM classifier," International Journal of Advanced Research in Computer Science and Software Engineering, 7 (4), 81–86, 2017.
[6] P. A. Octaviani, Y. Wilandari and D. Ispriyanti, "Penerapan metode klasifikasi support vector machine (SVM) pada data akreditasi sekolah dasar (SD) di Kabupaten Magelang," Jurnal Gaussian, 3 (4), 811–820, 2014.

International Journal of Applied Sciences and Smart Technologies, Volume 1, Issue 2, pages 169–178, p-ISSN 2655-8564, e-ISSN 2685-9432.

Design and Development of a Path-Tracking System Based on Radio Frequency Identification Sensor for Educational Toy Robot (EDOT)

Martinus Bagus Wicaksono
Department of Mechatronic Product Design, Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia
Corresponding author: baguswicax@yahoo.co.id
(Received 31-01-2019; revised 21-05-2019; accepted 21-05-2019)

Abstract. This paper offers the design and development of a path-tracking system based on radio frequency identification (RFID) sensors. The path-tracking system serves as a navigation system on EDOT, which requires one because it must drive automatically from a starting point to a predetermined end point. The system uses an RFID sensor to detect RFID cards that have been arranged to form a path.
EDOT then travels along the path composed of these RFID cards. EDOT is a response to the shortcomings of a previous system, the line follower, which uses infrared sensors to detect a line that guides a robot to its destination point. The path-tracking system used by EDOT detects the path to be traversed more reliably than robots using a line-follower system with infrared or LDR (light dependent resistor) sensors.

Keywords: path-tracking, RFID, EDOT, robot

1 Introduction

In recent years many children's toys have used robotic technology [1]. These toy robots are used to develop the ability of children, especially toddlers, to think logically. One such technology is the line-follower robot, in which the robot travels along a line that has been prepared beforehand. In this game the line that guides the robot is laid out first, so that it can guide the robot from a starting point to a destination point. In general, the sensor used to detect the line is an infrared sensor. In practice, the performance of this sensor is strongly influenced by two things: the distance from the sensor to the line and the presence of external light. In addition, the shape and length of the track cannot be changed according to the player's wishes. The game is therefore limited by these factors. This paper offers the design of a path-tracking system based on RFID sensors that detect RFID cards arranged to form the path the robot will follow. This system reads the path to be traversed more reliably than a line-follower system that uses infrared sensors.
In a line follower that uses a light sensor (LED and photodiode), the sensitivity of the sensor is greatly influenced by the light around the robot [2]. A line-follower system that uses an LDR sensor for tracking and navigation has almost the same limitation: it is greatly influenced by the distance from the sensor to the detected line and by light reflected from the surrounding environment [3]. In another application, an RFID sensor is used to detect the identity of a car entering a parking area and to provide information on its parking position [4]. The path-tracking system uses a standard off-the-shelf RFID sensor, and the RFID cards used to compose the robot's track are likewise standard cards available on the market. The system uses an Arduino Nano as the control system. All components are standard parts available on the market, to reduce cost and speed up construction of the path-tracking system.

This paper is structured as follows: Section 2 describes the design of the path-tracking system on EDOT; the method of building the path-tracking system is described in Section 3; the results and discussion are presented in Section 4.

2 Design

In this section we explain the design of our proposed system. RFID sensors use radio-wave-based technology to detect objects called RFID cards. The two devices communicate as follows: the RFID sensor emits waves that provide power and a clock to the RFID card, which then transmits the identity data written on it.
In this design, RFID cards are arranged to form the path; an RFID sensor detects the track; an Arduino Nano serves as the control system; a DC motor driver regulates the motors; and geared DC motors generate the robot's main movement. A block diagram of the process is shown in Figure 1. This path-tracking system is designed using standard components available on the market. The purpose is to simplify replacement of components if damage occurs through misuse; in addition, off-the-shelf components are cheaper than custom-made ones.

Figure 1. Block diagram of the path-tracking system.

In some applications, RFID cards are used to store personal identity data, for example as a student attendance detector [5]. In this RFID path-tracking design, the path is built from RFID cards that have each been given a different identity according to the required command, as shown in Table 1. The cards are 125 kHz RFID cards, as shown in Figure 2.

Table 1. List of RFID card identities (identity values are printed on the cards and not reproduced here).
  No  Command
  1   Forward
  2   Turn left
  3   Turn right
  4   Stop

Figure 2. RFID cards: turn right, stop, forward, turn left.

The RFID sensor RC522, shown in Figure 3, is used to detect the RFID cards, each of which carries its own identity. The sensor is installed at the bottom of the robot so that it can directly detect the RFID cards arranged as the path EDOT will follow. The sensor is placed between the main wheels to help EDOT move along the pre-arranged track.

Figure 3. RFID sensor.

A microcontroller, an Arduino Nano, is used to control EDOT based on the RFID card data read by the RFID sensor.
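The card-identity-to-command mapping of Table 1 amounts to a lookup table keyed on the card's identity. A hedged sketch in Python follows; the UID strings are invented placeholders (the paper does not list the actual identities), and the real firmware runs on the Arduino Nano rather than in Python:

```python
# Hypothetical card UIDs mapped to movement commands.
# The UID values here are made-up examples, not the cards' real identities.
CARD_COMMANDS = {
    "04A1": "forward",
    "04A2": "turn_left",
    "04A3": "turn_right",
    "04A4": "stop",
}

def command_for(uid: str) -> str:
    """Return the movement command for a detected card UID.

    Unknown cards are ignored so the robot keeps its current behaviour."""
    return CARD_COMMANDS.get(uid, "ignore")

cmd = command_for("04A2")       # -> "turn_left"
unknown = command_for("FFFF")   # -> "ignore"
```

The same dictionary-dispatch pattern maps directly onto a switch over UIDs in the Arduino C++ firmware.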
The program written on the microcontroller controls the motion of EDOT so that it follows the prepared path. Data received by the RFID sensor from reading an RFID card is input to the microcontroller, which sends a command to the DC motor driver; the driver adjusts the rotation of the DC motors that drive EDOT, as shown in Figure 4.

Figure 4. Arduino Nano, DC motor driver (L298N), part of EDOT.

Figure 5 shows the overall shape of EDOT, which uses the RFID sensor as a path-tracking system. As discussed above, the RFID sensor is placed on the bottom surface of EDOT facing the track so that the sensor can detect the track properly. Figure 6 shows EDOT moving along the prepared path. The RFID cards must be arranged so that they point in the same direction as the arrows printed on them.

Figure 5. EDOT.

Figure 6. EDOT on the path.

3 Method

This section is devoted to the research method. To validate the system, several path-tracking experiments were conducted to determine the responses of the system when tracking a path. RFID cards with different identities were used to check the accuracy of the system in detecting the path. The procedure of the experiment is as follows:
1. Set the identity of the RFID cards based on the command (forward, turn left, turn right, stop).
2. Make 5 cards for every command.
3. Run the path-tracking system.
4. Detect the RFID card.
5. Vary the distance from the sensor to the card.
6. Record the observations of the system.
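The recorded observations (Tables 2 to 7 below) can be summarized programmatically to find the largest distance at which every command is still read. The data in this sketch is transcribed from those tables; the code itself is illustrative, not part of the experiment:

```python
# Detection outcomes per sensor-to-card distance (mm), transcribed
# from Tables 2-7: True = success, False = fail.
results = {
    1:  {"forward": True,  "turn_left": True,  "turn_right": True,  "stop": True},
    10: {"forward": True,  "turn_left": True,  "turn_right": True,  "stop": True},
    20: {"forward": True,  "turn_left": True,  "turn_right": True,  "stop": True},
    23: {"forward": False, "turn_left": False, "turn_right": True,  "stop": False},
    25: {"forward": False, "turn_left": False, "turn_right": False, "stop": False},
    30: {"forward": False, "turn_left": False, "turn_right": False, "stop": False},
}

def max_reliable_distance(data):
    """Largest tested distance at which all four commands were detected."""
    ok = [d for d, outcomes in data.items() if all(outcomes.values())]
    return max(ok)

best = max_reliable_distance(results)  # -> 20 (mm)
```

This reproduces the conclusion drawn later in the paper: detection is reliable only up to 20 mm.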
By detecting the different cards and varying the distance from the sensor to the card, the data in Tables 2 to 7 were obtained.

Table 2. Detection distance of 1 mm.
  No  Type of command  Detection result
  1   Forward          Success
  2   Turn left        Success
  3   Turn right       Success
  4   Stop             Success

Table 3. Detection distance of 10 mm.
  No  Type of command  Detection result
  1   Forward          Success
  2   Turn left        Success
  3   Turn right       Success
  4   Stop             Success

Table 4. Detection distance of 20 mm.
  No  Type of command  Detection result
  1   Forward          Success
  2   Turn left        Success
  3   Turn right       Success
  4   Stop             Success

Table 5. Detection distance of 23 mm.
  No  Type of command  Detection result
  1   Forward          Fail
  2   Turn left        Fail
  3   Turn right       Success
  4   Stop             Fail

Table 6. Detection distance of 25 mm.
  No  Type of command  Detection result
  1   Forward          Fail
  2   Turn left        Fail
  3   Turn right       Fail
  4   Stop             Fail

Table 7. Detection distance of 30 mm.
  No  Type of command  Detection result
  1   Forward          Fail
  2   Turn left        Fail
  3   Turn right       Fail
  4   Stop             Fail

To validate the system, it was also tested in actual working conditions: the system reads RFID cards arranged to form the path that EDOT will traverse. The purpose of this test is to find the right delay between reading an RFID card, sending the command for EDOT to move according to the card's identity, and reading the next card in the path. The data are shown in Tables 8 to 12.

Table 8. Time delay of 3 seconds.
  No  Order of the cards   Result
  1   F – F – F – S        Fail
  2   F – TR – F – S       Fail
  3   F – TR – TL – F – S  Fail
  4   F – TL – TR – S      Fail

Table 9. Time delay of 1 second.
  No  Order of the cards   Result
  1   F – F – F – S        Fail
  2   F – TR – F – S       Fail
  3   F – TR – TL – F – S  Fail
  4   F – TL – TR – S      Fail

Table 10.
Time delay of 0.75 second.
  No  Order of the cards   Result
  1   F – F – F – S        Fail
  2   F – TR – F – S       Fail
  3   F – TR – TL – F – S  Fail
  4   F – TL – TR – S      Fail

Table 11. Time delay of 0.5 second.
  No  Order of the cards   Result
  1   F – F – F – S        Success
  2   F – TR – F – S       Success
  3   F – TL – F – S       Success
  4   F – TL – TR – F – S  Fail

Table 12. Time delay of 0.3 second.
  No  Order of the cards   Result
  1   F – F – F – S        Fail
  2   F – TR – F – S       Fail
  3   F – TR – TL – F – S  Fail
  4   F – TL – TR – S      Fail

Abbreviations: F: forward; TR: turn right; TL: turn left; S: stop.

4 Results and Discussion

We present our research results and discussion in this section. The experimental data in Tables 2 to 4 indicate that no errors occur in the path-detecting process, while the data in Tables 5 to 7 indicate that errors do occur. Since there is no problem with the Arduino program, the errors most likely originate in the hardware configuration. The maximum detection distance is below 23 mm. The performance of the path-tracking system can be kept reliable by setting the distance from the RFID sensor to the RFID card between 1 mm and 20 mm. The rigidity of the structure should also be taken into account for better results.

In the validation test, Table 11 shows that the system gives its best performance when the time delay is set to 0.5 second, although the fourth experiment was not successful. There are several possible causes for this failure, such as friction between the wheel surface and the track and the thickness of the cards interfering with EDOT's movement. The experiments show that the RFID sensor must be installed at a distance of less than 23 mm from the surface of the track. Another parameter is the time delay.
This delay sets how long the wheels move after the sensor detects the track. The best delay time is around 0.5 second for accurate reading of the RFID cards by the RFID sensor.

5 Conclusion

The design and development of a path-tracking system based on a radio frequency identification sensor for the educational toy robot (EDOT) have been discussed in this paper. The path-tracking system was tested by detecting RFID cards at various distances and with different commands, as well as on a real track built from arranged RFID cards. Based on the data, some errors still occur in the detection process; however, the sources of the errors have been identified for follow-up. Improving the detection quality of the path-tracking system remains future research.

References

[1] G. A. Demetriou, "Mobile robotics in education and research," in Mobile Robots: Current Trends, Z. Gacovski, Ed., Croatia: InTech, 27–48, 2011.
[2] D. A. N. Janis, D. Pang, and J. O. Wuwung, "Rancang bangun robot pengantar makanan line follower," Jurnal Teknik Elektro dan Komputer, 3 (1), 1–10, 2014.
[3] Y. Prabowo and S. Hepy, "Line follower robot berbasiskan mikrokontroler Atmel 16," Jurnal Ilmiah BIT, 8 (2), 44–52, 2011.
[4] F. A. Imbiri, N. Taryana, and D. Nataliana, "Implementasi sistem perparkiran otomatis dengan menentukan posisi parkir berbasis RFID," Elkomika: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika, 4 (1), 31–46, 2016.
[5] N. Sparkhojayev and S. Guvercin, "Attendance control system based on RFID-technology," IJCSI International Journal of Computer Science Issues, 9 (3), 227–230, 2012.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 1, pages 55–64, p-ISSN 2655-8564, e-ISSN 2685-9432.

Stone, Paper, Scissors Mini-Game for AI Pet Robot

Aditya Aspat*, Elton Lemos, Abhishek Ghoshal
Department of Computer Engineering, Xavier Institute of Engineering, Mahim, Mumbai, Maharashtra, India
*Corresponding author: aditya.dir@gmail.com
(Received 20-07-2020; revised 19-12-2020; accepted 19-12-2020)

Abstract. The artificial intelligence (AI) pet robot combines various fields of computer science. This paper showcases the functionalities of our AI pet, most of which use the image-processing modules made available through OpenCV. The pet robot has features such as emotion recognition, a follow routine, and a mini-game; this paper discusses the mini-game aspect of the robot. The game uses the VGG16 convolutional network to identify the action performed by the user. To improve accuracy we apply background subtraction, which removes unwanted objects from the background and yields a clean cutout of the user's hand.

Keywords: pet robot, VGG16, background subtraction

1 Introduction

Owning a pet comes with countless benefits for our physical as well as mental health [1]. Pets give us more opportunities to go outside, exercise and socialize with people. In times such as the pandemic, having a pet by your side provides company when we are required to socially distance from everyone else.
However, along with all the benefits of owning a pet, there are some drawbacks as well [2]. Pets can transmit various diseases if they are not properly taken care of. Many housing societies have rules that do not allow pets inside the buildings, and pets may start barking in the middle of the night, which can be a nuisance to the neighbors. For these reasons, we propose the development of an inexpensive pet using modern technologies: artificial intelligence (emotion recognition and face recognition), Internet of Things (for communicating with the bot and controlling its movement), and image processing (for a mini-game). The result interacts seamlessly with humans and offers the benefits of owning a pet while avoiding the drawbacks.

2 Existing Systems

A few pet robots are already available in the market; learning about these systems helps to position ours. This section also looks at algorithms previously implemented for robots with similar functionality. There have been multiple attempts at making realistic AI pets, some dating back to the 1990s. Tamagotchis were handheld digital pets created in Japan in 1997 [3]. This representation of pets quickly became popular: the colourfully designed creatures would grow differently based on the level of care provided by the user. More recently, companies like Sony have come up with realistic pets such as the Aibo [4], which has a slew of features including facial recognition, emotion detection, automatic battery charging and many more. Pibo is another AI pet, created by Circulus [5]; its features include weather reports, alarms, notifications, taking photos and other interactive functions. These products share one common problem: their cost is upwards of $1000. One of our goals is to create a fully functioning AI pet from cheap, commercially available equipment.
3 Implementation Methodology

The architecture of our pet closely resembles a living pet and has four major parts:
• The brain: the computer acts as the brain and performs all the computation.
• The spine: all messages flow through it; the Pi 4B acts as a kernel.
• The eyes: the NoIR camera acts as the eyes of our bot, capturing images.
• The limbs: the Arduino Uno and its motors act as the limbs and perform movements as commanded.

Hardware design:
• Vision unit: responsible for what the bot sees; it captures images and sends them to the computer for image processing. It comprises the Raspberry Pi 4 and its camera module. We use a NoIR camera, which helps the bot capture images in low light. The camera's video stream is hosted using the RPi-Cam-Web-Interface; the computer captures this stream and uses it for various processing features.
• Motor unit: responsible for the movement of the bot. It comprises actuators such as DC motors and servos controlled by the Arduino Uno and powered by a 12 V battery. The Arduino oversees motor operations, but the final decision on which movement to perform is taken by the computer.
• Communication unit: the tunnel for information transfer between the computer, the Pi 4B and the Arduino Uno. It consists of a serial interface between the Pi 4B and the Arduino Uno, used by the Pi 4B to instruct the Arduino or to forward instructions from the computer, and a socket connection between the Pi 4B and the computer for wireless communication using TCP.
• Interfacing with the robot: the robot has a joystick for an arm. If the user shakes hands with the robot, the robot goes into listening mode and listens for commands from the user. If the robot cannot understand a command, it informs the user and then continues what it was doing previously. The commands the robot can obey are:
  - Stop: the robot stops whatever it is doing and enters face-detection mode.
  - Tell me the time: the UTC time is fetched from a server, displayed on the screen, and spoken aloud by the bot.
  - Read "book name": the robot uses the Google text-to-speech module to read books to the user.
  - Follow me: the bot starts tracking the user's movement and follows him or her while maintaining a safe distance.
  - Play a game: starts the game of stone, paper and scissors; the action performed by the bot is displayed on the attached screen.

4 Implementation

This section elaborates on the algorithms used to implement the stone, paper and scissors game, how they work, and how integrating them makes the game easy to play for anyone.

4.1 VGG16

We use the VGG16 convolutional network model proposed by K. Simonyan and A. Zisserman of the University of Oxford in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" [6]. The model achieves 92.7% top-5 test accuracy on ImageNet, a dataset of over 14 million images belonging to 1000 classes. The architecture [7] of VGG16 is shown in Figure 1. Figure 1.
VGG16 architecture.

4.2 Training

Deep convolutional neural network models may take days or even weeks to train on very large datasets. A way to shortcut this process is to reuse the weights of pre-trained models developed for standard computer-vision benchmark datasets, such as the ImageNet image recognition task. Top-performing models can be downloaded and used directly, or integrated into a new model for one's own computer-vision problems. This is transfer learning: applying pre-trained models to our own dataset to obtain a state-of-the-art model. We used 550 images with variation for each of the three classes and achieved 98% accuracy on the validation set, a good platform for testing the model in a real-time scenario. As seen in previous implementations, the real-time accuracy of models suffers from deviations from ideal test conditions. Training images are shown in Figure 2.

Figure 2. Training images.

Figure 3. Model accuracy.

In Figure 3, metrics for some of the epochs are shown. "acc" is the training accuracy: the number of accurately classified images divided by the number of images in the training set. "val loss" is the value of the cost function on the validation data. "val acc" is the validation accuracy: the number of accurately classified validation images divided by the total number of images in the validation set. Some of the barriers we faced during real-time testing were related to both hardware and software; these are mentioned in the real-time testing explanation below.

5 Results and Discussion

In this section, we provide our real-time results for the mini-game.
As shown in the flowchart (see Figure 4), right after we turn on the camera we take a snapshot of the empty background and use background subtraction [8] to obtain a cutout of the hand. We need to make sure that no object in the background moves, especially within the boundaries of the blue square. In the image below, the left side is the normal image and the right side shows a continuous background-subtracted image; since the person is the only thing in the image that is moving, an outline appears around the person.

Figure 4. Flowchart for real-time testing.

Next, we use the captured empty frame to detect any new object moving into the blue square, i.e. our hand. We cut out this part of the image and apply blurring to reduce the noise caused by the shortcomings of the camera; as it gets darker, the graininess of the image increases, giving rise to even more noise.
1. The program takes an image of the still background. This is later subtracted from the real-time image (see Figure 5). Figure 5. Continuous background subtraction.
2. We perform the desired gesture and the program captures this image (see Figure 6). Figure 6. Gesture made by user.
3. We subtract the current image from the saved image, which gives us the required format of the image (see Figure 7). Figure 7. Background subtraction with hand.
4. We cut out the part of the image we need (the hand), which is then fed to the model (see Figure 8). Figure 8.
Cutout of hand.

Previously encountered software issues involved the difficulty of extracting just the hand from the image in a way that resembles the images the model was trained on. This is necessary to reduce any real-time errors we may encounter. All these steps had to be performed to get our real-time images, as fed to the model, as close to the data set as possible; only then could we successfully use our trained model.

6 Conclusion

The proposed algorithms show the functionality of our AI pet robot. We used VGG16 and background subtraction to implement a simple game of stone, paper and scissors and integrated it into our AI pet robot alongside various other features. Background subtraction solved the issues we faced when taking a cutout of the action performed by the user. This game can be played by children to pass the time and have some entertainment without being exposed to the internet or mobile devices at a young age.

References

[1] Healthy pets, healthy people. U.S. Department of Health & Human Services. https://www.cdc.gov/healthypets/healthbenefits/index.html (accessed: 02/03/2020).
[2] E. Paul Cherniack and Ariella R. Cherniack, "Assessing the benefits and risks of owning a pet," Canadian Medical Association Journal, 2015.
[3] The life and death of Tamagotchi and the virtual pet, https://wellcomecollection.org/articles/wst4ex8aahrugfwb (accessed: 13/08/2020).
[4] Sony Aibo: the dog and personal assistant of the future, https://www.forbes.com/sites/moorinsights/2019/05/01/sony-aibo-the-dog-andpersonal-assistant-of-the-future (accessed: 05/03/2020).
[5] Pibo, https://pibo.circul.us (accessed: 05/03/2020).
[6] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv 1409.1556, 2014.
[7] Step by step VGG16 implementation in Keras for beginners, https://towardsdatascience.com/step-by-step-vgg16-implementation-in-keras-forbeginners-a833c686ae6c (accessed: 08/03/2020).
[8] Using background subtraction, https://docs.opencv.org/master/d1/dc5/tutorial background subtraction.html (accessed: 10/03/2020).

International Journal of Applied Sciences and Smart Technologies, Volume 1, Issue 1, pages 11–22, ISSN 2655-8564.

Spur Gears Transmission Analysis on Continuous Passive Motion Machine Design for Shoulder Joint

Felix Krisna Aji Nugraha¹*, Antonius Hendro Noviyanto²
¹Department of Mechatronic Product Design, Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia
²Department of Electromedical Technology, Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia
*Corresponding author: felix@pmsd.ac.id
(Received 14-05-2019; revised 15-05-2019; accepted 15-05-2019)

Abstract. An analysis of the gear transmission of a continuous passive motion (CPM) machine has been carried out on a 3-dimensional design using SolidWorks software. The strength of the gear structure is affected by the weight of the patient's arm. The gear transmission under the load of the passive arm is analyzed with a static simulation in which the patient's arm load is entered. The simulation uses fixed-geometry conditions on the parts connected to the shaft, a gravitational acceleration of 10 m/s², mesh generation, and a simulation run. The maximum stress in gear 3 (z = 100) is 4.5524e6 N/m², the maximum stress in gear 2 (z = 80) is 4.81729e6 N/m², and the maximum stress in gear 1 (z = 20) is 9.08982e6 N/m².

Keywords: mesh, static simulation, continuous passive motion machine, spur gear
1 Introduction
A continuous passive motion (CPM) machine is a therapeutic tool used to train patients in joint movements after joint surgery [1]. The CPM machine is designed to perform flexion and horizontal adduction using a spur gear transmission. The gears reduce the speed of the actuator and increase its torque, so that the machine can passively move the patient's shoulder during therapy.

The development of computer-aided design (CAD) technology is very helpful in designing a product or machine. The design process in the manufacturing industry used to take a lot of time. An engineer experienced in CAD can use the various tools in CAD software for many mechanical engineering applications, so that design time is shortened and productivity and quality improve. One CAD package for designing and analyzing 3-dimensional models is SolidWorks.

The purpose of this study was to analyze the strength of the gear transmission material under the patient's arm load. The material strength of the gears is analyzed through the stress and strain on the gears. All analyses in this study use SolidWorks software. Our main references are [2-5].

2 Research Methodology
The steps taken before running the simulation are as follows.

2.1. Research methods
The research is carried out by designing a 3-dimensional CPM machine; from the design, the spur gear transmission is analyzed. Using the software, the stress and strain experienced by each gear exposed to the specified load are computed. The research method consists of the following steps:
a. Collect the geometry of the CPM machine to be designed.
b. Design the 3-dimensional CPM machine with spur gear transmission using SolidWorks software.
c. Analyze the material strength of the spur gear transmission of the designed CPM machine under the specified load using SolidWorks software.
d. Analyze the material strength of the spur gears of the CPM machine using SolidWorks software.

2.2. Design methods of the 3-dimensional CPM machine
The CPM machine to be designed contains 2 gearboxes, used to reduce the speed and increase the torque of a DC motor actuator. The two gearboxes, one for flexion and one for horizontal adduction, have the same transmission pairs. The method for analyzing the CPM gear transmission using SolidWorks is as follows:
a. Design of the spur gears
1. Determine the patient's arm load.
2. Determine the torque required for the movement of the CPM machine.
3. Determine the gear ratio used in the design of the CPM machine.
4. Determine the number of spur gear transmission stages used in the CPM machine.
5. Determine the dimensions, modules, and number of teeth of the spur gears based on the ratio of each transmission stage.
6. Determine the spur gear material in the design of the CPM machine.
b. Design of the spur gear transmission
It is assumed that the patient's arm weighs 5 kg and is 80 cm long, so the torque produced by the arm is 20 Nm, as shown in the equation below.
T = ½ × arm length × arm weight × gravity
T = 0.40 m × 5 kg × 10 m/s²
T = 20 Nm

The DC motor actuator used has a torque of 1.5 Nm and a rotational speed of 70 rpm, so the gear transmission ratio is obtained from the equation

τ_nc = τ_d × i × μ

with symbols and descriptions:
τ_nc : torque needed
τ_d : DC motor torque
i : ratio
μ : efficiency (75%)

So we obtain:

i = τ_nc / (τ_d × μ) = 20 / (1.5 × 0.75) ≈ 17.78

From this calculation the required ratio is about 17.78. The ratio is increased to 20 as a safety factor. The total ratio is divided into 2 transmission stages, with ratios of 4 and 5. Gear1 is connected directly to the actuator and transmits to gear2; gear2 and gear3 are on the same shaft, and gear3 then transmits to gear4. All gears use the same module, equal to 1 mm. Gear1 is set to have 20 teeth, so in the first stage the number of teeth of the driven gear is

z₂ = i₁ × z₁ = 4 × 20 = 80

For the second-stage spur gear transmission we obtain

z = i₂ × z₁ = 5 × 20 = 100

From these calculations the stages of the spur gear transmission are obtained as shown in Figure 1.

Figure 1. Spur gear transmission in the CPM machine

c. The simulation method for structural strength in SolidWorks is as follows:
1. Use the simulation facility.
2. Select the static test type.
3. Enter the material used for the spur gears.
4. Determine the fixed parts of the design.
5. Apply gravity to the spur gears.
6. Determine the part of the spur gear affected by the patient's arm load and enter its value.
7. Create a mesh on the spur gears.
8. Run the simulation.
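The torque, ratio, and tooth-count calculation above can be summarized in a short numeric sketch (the variable names are mine, chosen for illustration, not taken from the paper):

```python
# Sketch of the torque, gear-ratio, and tooth-count calculation described above.
arm_length_m = 0.80   # patient's arm length
arm_mass_kg = 5.0     # patient's arm mass
g = 10.0              # gravitational acceleration used in the paper

# Torque about the shoulder, with the arm's weight acting at its midpoint
torque_needed = 0.5 * arm_length_m * arm_mass_kg * g   # 20.0 Nm

motor_torque = 1.5    # DC motor torque, Nm
efficiency = 0.75     # transmission efficiency

# Required overall ratio, from torque_needed = motor_torque * i * efficiency
i_required = torque_needed / (motor_torque * efficiency)   # about 17.78
i_design = 20         # rounded up as a safety factor, split into stages 4 and 5

z1 = 20               # pinion teeth, module 1 mm
z2 = 4 * z1           # first-stage driven gear: 80 teeth
z3 = 5 * z1           # second-stage driven gear: 100 teeth
```

With these inputs the sketch reproduces the paper's numbers: a 20 Nm arm torque, a required ratio just under 18, and driven gears of 80 and 100 teeth.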
3 Results and Discussion
This section presents the results and discussion of the simulation of the spur gears used in the CPM machine. The analyses carried out in this study include:
1. Analysis of spur gear1 (z = 20)
2. Analysis of spur gear2 (z = 80)
3. Analysis of spur gear3 (z = 100)

3.1. Results
The design of the CPM machine is shown in Figure 2.

Figure 2. Design of the CPM machine

1. Simulation analysis of the load on gear1 (z = 20)
The analysis results for the design of gear1 with z = 20 are shown in Figure 3, Figure 4, and Figure 5.

Figure 3. Stress on gear1 (z = 20) due to the patient's arm load
Figure 4. Displacement on gear1 (z = 20) due to the patient's arm load
Figure 5. Strain on gear1 (z = 20) due to the patient's arm load

Description of spur gear1 (z = 20):
Material: 1.0503 (C45)
Mass: 0.0277809 kg
Volume: 3.56165e−006 m³
Density: 7800 kg/m³
Weight: 0.277809 N
Resultant forces: 51.7429 N
Total nodes: 22126
Total elements: 13140

Table 1. Results of the analysis of the patient's arm load on spur gear1 (z = 20)
Type | Min | Max
Stress (VON: von Mises stress) | 120.602 N/m² (node 4339) | 9.08982e+006 N/m² (node 5019)
Displacement (URES: resultant displacement) | 0 m (node 838) | 0.000249418 mm (node 227)
Strain (ESTRN: equivalent strain) | 3.2199e−010 (element 3550) | 3.18916e−005 (element 6033)

Table 1 shows the results of the structural analysis due to the patient's arm load on gear1 (z = 20): the displacement/deflection on gear1 is 0.000249418 mm, and the strain on gear1 is 3.18916e−005.
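The safety assessments in this section compare each gear's simulated maximum von Mises stress against the material's yield strength. A quick numeric check of that margin, a sketch using the stress values reported for gear1 and gear2 and the quoted yield strength of C45, not the authors' code:

```python
# Sketch: ratio of material yield strength to the simulated maximum von Mises
# stress, using values reported in this paper for gears 1 and 2.
yield_strength = 5.8e8          # N/m^2, material 1.0503 (C45)

max_stress = {
    "gear1 (z=20)": 9.08982e6,  # N/m^2, from Table 1
    "gear2 (z=80)": 4.81729e6,  # N/m^2, from Table 2
}

# Margin (safety factor) for each gear; values far above 1 mean the gear
# operates well below yield.
margins = {name: yield_strength / s for name, s in max_stress.items()}
```

The margins come out to roughly 64 for gear1 and 120 for gear2, which is why the text can describe every gear as far below the allowable yield strength.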
Gear1 is still relatively safe because the maximum von Mises stress is 9.08982e+006 N/m², far below the allowable yield strength, which is 5.800e+008 N/m².

2. Simulation analysis of the load on gear2 (z = 80)
The analysis results for the design of gear2 with z = 80 are shown in Figure 6, Figure 7, and Figure 8.

Figure 6. Stress on gear2 (z = 80) due to the patient's arm load
Figure 7. Displacement on gear2 (z = 80) due to the patient's arm load
Figure 8. Strain on gear2 (z = 80) due to the patient's arm load

Description of spur gear2 (z = 80):
Material: 1.0503 (C45)
Mass: 0.45213 kg
Volume: 5.79653e−005 m³
Density: 7800 kg/m³
Weight: 4.43087 N
Resultant forces: 49.4847 N
Total nodes: 20410
Total elements: 12580

Table 2. Results of the analysis of the patient's arm load on spur gear2 (z = 80)
Type | Min | Max
Stress (VON: von Mises stress) | 48.9205 N/m² (node 15026) | 4.81729e+006 N/m² (node 19255)
Displacement (URES: resultant displacement) | 0 m (node 110) | 0.000274988 mm (node 1763)
Strain (ESTRN: equivalent strain) | 2.44294e−010 (element 5820) | 1.77017e−005 (element 1609)

Table 2 shows the results of the structural analysis due to the patient's arm load on gear2 (z = 80): the displacement/deflection on gear2 is 0.000274988 mm, and the strain on gear2 is 1.77017e−005. Gear2 is still relatively safe because the maximum von Mises stress is 4.81729e+006 N/m², far below the allowable yield strength, which is 5.800e+008 N/m².

3. Simulation analysis of the load on gear3 (z = 100)
The analysis results for the design of gear3 with z = 100 are shown in Figure 9, Figure 10, and Figure 11.

Figure 9. Stress on gear3 (z = 100) due to the patient's arm load

Gear3 is still relatively safe because the maximum von Mises stress is 9.08982e+006 N/m², far below the allowable yield strength, which is 5.800e+008 N/m².

3.2. Discussion
To analyze the strength of the spur gear material under the patient's arm load, the part of the transmission affected by the load is analyzed as each gear transmits torque to the next spur gear. This can be simulated because, when rotating, each gear experiences the same load and direction. From the design and simulation, the maximum stress on gear1 (z = 20) is 9.08982e+006 N/m². On gear2 (z = 80) the maximum stress is 4.81729e+006 N/m². The maximum stress in gear3 (z = 100) is 9.08982e+006 N/m². These maximum stress values show how the teeth of the spur gears are affected by the patient's arm load. The simulation results show that material 1.0503 (C45) is still safe: the maximum stress occurring in each gear is still far below the yield strength of material 1.0503 (C45), which is 5.8e+008 N/m².

4 Conclusion
Three-dimensional design and simulation are very helpful in evaluating the design of a device. The loading on a symmetric section can be assumed to act on one loaded part of the tool. The analysis in this study was carried out with static loading; analysis through simulation with dynamic loading remains possible.

References
[1] S. Miyaguchi, N. Matsunaga, K. Nojiri, and S. Kawaji, "Impedance control of CPM device with flex-/extension and pro-/supination of upper limbs," IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2007.
[2] A. C. Lad and A. S. Rao, "Design and drawing automation using SolidWorks application programming interface," International Journal of Emerging Engineering Research and Technology, 2 (7), 157−167, 2014.
[3] M. H. H. Razali, M. A. H. A. Ssomad, S. M. Sapuan, M. N. A. Noordin, M. Hasbullah, and R. Syazili, "Simulation and analysis of innovative hand tool harvester," Scientific Research and Essays, 7 (19), 1864−187, 2012.
[4] SolidWorks, Tutorial SolidWorks Simulation, Dassault Systèmes SolidWorks Corporation, 2016.
[5] SolidWorks, An Introduction to Flow Analysis Applications with SolidWorks Flow Simulation, Student Guide, Dassault Systèmes SolidWorks Corporation, 2010.

International Journal of Applied Sciences and Smart Technologies, Volume 5, Issue 1, Pages 133-142, p-ISSN 2655-8564, e-ISSN 2685-9432
This work is licensed under a Creative Commons Attribution 4.0 International License.

Classification of Lung and Colon Cancer Histopathological Images Using the Convolutional Neural Network (CNN) Method on Pre-Trained Models

Brilly Lutfan Qasthari1, Erma Susanti1,*, Muhammad Sholeh1
1Faculty of Information Technology and Business, Institut Sains & Teknologi AKPRIND, Yogyakarta, 55222, Indonesia
*Corresponding author: erma@akprind.ac.id
(Received 04-05-2023; revised 11-05-2023; accepted 11-05-2023)

Abstract
Cancer is a severe illness that can affect both young and older people. In Indonesia, lung cancer is the leading cause of cancer-related death, while colon cancer, with more than 1.8 million cases worldwide in 2018, is the third most common cancer. This study aims to create a model that categorizes histological images of lung and colon cancer into five labels to aid medical professionals' categorization work. The study uses a pre-trained model known as VGG19 in its CNN (convolutional neural network) approach.
The dataset consists of 25,000 histological images, with a ratio of 80% training data to 20% testing data. The classification system for lung and colon cancer contains five categories: lung benign tissue, lung adenocarcinoma, lung squamous cell carcinoma, colon adenocarcinoma, and colon benign tissue. Training produced a 99.96% accuracy rate and a 1.5% loss rate. Based on these results, the model can be rated as excellent.

Keywords: lung cancer, colon cancer, convolutional neural network, CNN, pre-trained

1 Introduction
Cancer is a dangerous condition that can affect both young and older people. Cancer has abnormal characteristics that enable it to attack cells or other bodily organs without the affected person knowing it. Estimates of cancer incidence and mortality are available by sex and for 18 age groups in 2020 for the 185 countries or regions with a population of more than 150,000 in that year. Lung cancer results when the cells lining the lung airways divide improperly and uncontrollably, generating abnormal tissue. Lung cancer is the most common cancer causing death [1] and is the leading cause of cancer mortality in Indonesia [2]. Colorectal cancer, commonly referred to as colon cancer, develops in the colon or rectum. The rectum and large intestine are components of the digestive system that contribute to the production of energy and the elimination of waste. According to statistics from the American Institute for Cancer Research, colon cancer is the third most prevalent cancer worldwide, with almost 1.8 million cases in 2018 [3]. In addition to diet, lack of fibre, smoking, and alcohol use, age is the most significant risk factor for colon cancer.
Symptoms of colorectal cancer include changes in bowel habits, stomach pain, blood in the stool, anaemia, fatigue, loss of appetite, and weight loss.

Artificial intelligence (AI) technology is used in the medical industry as a decision-support tool for identifying diseases and helps speed up image analysis. Computer-aided diagnostics can analyze medical images [3]. The convolutional neural network (CNN) method has been used in several earlier studies to demonstrate that cancer can be classified using AI technology, with good resulting model accuracy. Compared to manual evaluation by medical experts, AI technology is computationally much faster at categorizing lung cancer images: modelling the lung cancer categorization system takes two hours of computation [4], whereas a physical examination by medical staff takes 10–14 days to identify lung cancer.

Several different pre-trained models are available for CNNs; examples of these architectures are LeNet, AlexNet, GoogLeNet, ConvNet, and ResNet. In classifying biomedical images, the AlexNet architecture is likely to reach a high accuracy of 90% [5]. By comparison, research using a biomedical dataset (diagnosis of colonic adenocarcinoma) attained an accuracy of 93% using the ResNet architecture [6]. A pre-trained model is used in this study since it performs well for classification; the average accuracy of the AlexNet and ResNet models is above 90%, based on several prior studies. This work aims to develop a classification system for lung and colon cancer from histological images using various pre-trained models and to assess which model performs best on the histopathological images used.
2 Methods
In this study, three convolutional layers and two fully connected layers are combined to create the convolutional neural network (CNN) approach for classification. In addition, it takes advantage of the VGG pre-trained transfer learning architecture. Transfer learning is an approach that makes use of existing network architectures. There is no need to start from scratch, because the CNN architecture used for transfer learning has already been trained on prior data. The choice of architecture affects the classification results.

2.1. Convolutional neural network
Typical CNN architectures stack a few convolutional layers (+ReLU), then a pooling layer, then more convolutional layers (+ReLU), then another pooling layer, and so on. The image gets smaller and smaller as it moves through the network, but it also usually gets deeper and deeper (i.e., with more feature maps) because of the convolutional layers. At the top, a regular feedforward neural network made up of a few fully connected layers (+ReLUs) is added, and the final layer of the stack (for example, a softmax layer that outputs estimated class probabilities) outputs the prediction [7].

2.2. VGGNet
Reusing the lower layers of a pre-trained model is frequently helpful when developing an image classifier without enough training data. VGGNet [8], created by K. Simonyan and A. Zisserman, placed second in the ILSVRC 2014 challenge. It featured a relatively straightforward and traditional design: 2 or 3 convolutional layers, a pooling layer, 2 or 3 more convolutional layers, a pooling layer, and so on (for a total of just 16 or 19 convolutional layers, depending on the variant), plus a final dense network with two hidden layers and the output layer. It used only 3 × 3 filters, but many of them [7].

According to research [9], the CNN architecture produces good results in case studies of age estimation. The estimation method with a categorization strategy produces satisfactory results.
The researchers' challenge with the CNN architecture is to develop the optimum loss function with the most Gaussian distribution. Based on the study's review results, the CNN architecture that provides the best prediction is VGG-16.

2.3. Research workflow
The research process is shown in the flowchart in Fig. 1. It begins with collecting the dataset, preprocessing (gathering the lung and colon cancer image datasets, scaling images, and dividing the datasets), building CNN models with sequential and pre-trained models, training and testing on the data, storing the models, implementing the models in the Flask framework, designing the application GUI, and predicting images.

Figure 1. Research workflow

3 Results and Discussion
The initial stages of this research involved gathering the dataset (image data), preparing the data, developing a model, testing it, and deploying it. The model used is based on the pre-trained VGG-19.

3.1. Collecting data
The 25,000-image "Lung and Colon Cancer Histopathological Images" dataset from Kaggle served as the source of the image collection. The image collection has five class labels: colon adenocarcinoma, colon benign tissue, lung adenocarcinoma, lung squamous cell carcinoma, and lung benign tissue.

3.2. Preprocessing
Data preparation is a step that must be completed before editing or adding data to a dataset. Because not all incoming data have the same format, the objective is to make understanding easier while reducing confusion during data entry. Preprocessing eliminates the possibility of inaccurate or unnecessary data influencing the statistics. 80% of the data are used for training and 20% for testing during the preprocessing stage.
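The split sizes described here, and the dense-layer parameter counts reported later in Table 1, can be checked with a few lines of arithmetic. This is an illustrative sketch (the helper name `dense_params` is mine), not the authors' code:

```python
# Quick arithmetic checks of the figures reported in this paper.

# Dataset split: 80/20 over 25,000 images, with the held-out portion further
# divided into 4,500 test images and 500 dummy images.
total_images = 25_000
train = int(total_images * 0.80)    # 20,000 training images
held_out = total_images - train     # 5,000 held-out images
test_images, dummy = 4_500, 500

# Parameter count of a fully connected (dense) layer:
# weights (n_in * n_out) plus one bias per output unit.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# The two trainable dense layers of the classification head in Table 1.
head_params = dense_params(512, 5120) + dense_params(5120, 5)
```

With these numbers, `dense_params(512, 5120)` gives 2,626,560 and `dense_params(5120, 5)` gives 25,605, matching Table 1; the VGG19 base's 20,024,384 parameters are frozen during transfer learning and do not add to the trainable count.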
The dataset is divided into 20,000 training images for prediction (training and validation), 4,500 testing images, and 500 dummy images. The data are then stored in Google Drive to simplify the image classification process. After that, the image settings are configured, and a data generator is created to produce the training and test data.

3.3. Building the CNN architecture model
Following preprocessing, the next step is the creation of the CNN model. The current study uses pre-trained models, so it does not create a model from scratch. The pre-trained principle is to replace the initial layers with the desired layers, often known as fine-tuning. The model can be seen in Table 1.

Table 1. The CNN model architecture
No. | Layer | Output shape | Parameters
1 | Input layer | 224, 224, 3 | 0
2 | VGG19 layer | 0 | 20,024,384
3 | Global average pooling 2D | 512 | 0
4 | Flatten | 512 | 0
5 | Dense | 5120 | 2,626,560
6 | Dropout | 5120 | 0
7 | Dense | 5 | 25,605

3.4. Transfer learning process
Applying the pre-trained model to carry out the transfer learning process comes after choosing the pre-trained model. The transfer learning process starts with the frozen layers of the pre-trained model. In the context of CNNs, freezing layers is how the updating of weights is controlled: a layer's weights cannot be modified once it has been frozen. This method can decrease the computation time for training while maintaining accuracy.

3.5. Results of training and testing
Forward and backward propagation are used during the training phase of the CNN algorithm. Fig. 2 displays the outcomes of training and testing on the data. The model performs well, with a loss on the training data of 1.5% and an accuracy of 99.96%.

Figure 2. The accuracy results for training and testing data

After performing the data training experiments, the matplotlib package is used over the epoch iterations to show the training results. Fig. 2 shows a good graph in addition to an excellent model. A model is good if it can run on or interpret data without being influenced by noise: because it describes a trend or set of data with a low error rate, the model does not become overfitted. This follows from the model's high accuracy value and minimal loss. The next step is to plot a loss graph from the training data used in the previous step. Four thousand five hundred images were tested. On the test data, the model achieves an accuracy of 99.82% and a loss of 2%, nearly identical to the training data. The loss outcomes are displayed in Fig. 3. Callbacks proved to be a very efficient way to reduce the time needed for training: with 30 epochs configured and 1 hour and 40 minutes of training time, the training reaches good accuracy at the 17th epoch and concludes at the 22nd.

Figure 3. The loss results for training and testing data

3.6. Classification or prediction results
After completing the various phases, the next step is to test the model by making image predictions. This prediction test determines whether or not the model can categorize images: the predictions for the images must match the labels assigned in the training data. Fig. 4 illustrates a successful image prediction using the colon adenocarcinoma label. The model correctly predicts the image according to the tested label, namely colon adenocarcinoma, with a probability of 100%. The original predicted image is 768 × 768 pixels in size.

Figure 4. Classification of the colon adenocarcinoma label with no filter

Figure 5. Colon adenocarcinoma label with filter

As seen in Fig. 5, the next step is to predict filtered images, using a sample with the colon adenocarcinoma label in various image forms. The label prediction results could have been better: the model did not predict the image according to the tested label, classifying the colon adenocarcinoma sample into the lung squamous cell carcinoma class. The filtered image is 768 × 768 pixels in size before it is resized. Predicting the image for the next label is the next stage.

4 Conclusions
The development of the CNN model, which began with the data collection phase and ended with the successful model deployment process, produced good classification results, as shown by the model accuracy of 99.96% with a 1.5% loss on the training data. On the test data, 99.82% of the images were classified correctly using the model's evaluate function, with a 2% loss. Because the model used matches the dataset provided, the feature extraction step of training on images of colon and lung cancer using the VGG19-based pre-trained model can be considered successful. The use of callbacks, including the model checkpoint feature, also makes it easier to train models, so that the model can quickly save the weights obtained from the training data. Accuracy increases with the number of epochs used, but a high number of epochs carries a significant risk of producing an overfitted model. The deployment of the model created through the training procedure on the Flask framework went well.
Utilizing the Flask framework, web applications can classify images according to the labels provided, using models trained on the data. The Flask framework can display probability values in the web application interface.

Acknowledgements
The authors are grateful for the opportunity to discuss this research with the study program and the Faculty of Information Technology and Business at IST AKPRIND. We would also like to acknowledge the Python software, Google Colaboratory, the Flask framework, and the Keras, seaborn, and sklearn packages used in this study.

References
[1] J. Ferlay et al., "Cancer statistics for the year 2020: an overview," Int. J. Cancer, (2021).
[2] M. G. Sholih et al., "Risk factors of lung cancer in Indonesia: a qualitative study," J. Adv. Pharm. Educ. Res., (2019).
[3] D. C. Rini Novitasari, A. Lubab, A. Sawiji, and A. H. Asyhar, "Application of feature extraction for breast cancer using one order statistic, GLCM, GLRLM, and GLDM," Adv. Sci. Technol. Eng. Syst., (2019).
[4] R. Apsari, Y. N. Aditya, E. Purwanti, and H. Arof, "Development of lung cancer classification system for computed tomography images using artificial neural network," in AIP Conference Proceedings, (2021).
[5] T. Shanthi and R. S. Sabeenian, "Modified AlexNet architecture for classification of diabetic retinopathy images," Comput. Electr. Eng., (2019).
[6] S. U. K. Bukhari, A. Syed, S. K. A. Bokhari, S. S. Hussain, S. U. Armaghan, and S. S. H. Shah, "The histological diagnosis of colonic adenocarcinoma by applying partial self supervised learning," medRxiv, (2020).
[7] A. Géron, Hands-On Machine Learning, (2017).
[8] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in 3rd International Conference on Learning Representations, ICLR 2015 Conference Track Proceedings, (2015).
[9] P. S. Adi, "Development study of deep learning facial age estimation," International Journal of Applied Sciences and Smart Technologies (IJASST), 1 (1), 45–50, 2019.

International Journal of Applied Sciences and Smart Technologies, Volume 1, Issue 1, Pages 23–32, ISSN 2655-8564

Influences of Annealing on the Electrical Properties of Ba0.5Sr0.5TiO3

Dwi Nugraheni Rositawati
Department of Physics Education, Faculty of Teacher Training and Education, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: wiwikfis@gmail.com
(Received 05-05-2019; revised 14-05-2019; accepted 14-05-2019)

Abstract
This research aims to investigate the influence of annealing on the electrical properties of Ba0.5Sr0.5TiO3. Ba0.5Sr0.5TiO3 material annealed at 900°C for 1, 2 and 4 hours has better mechanical properties; its electrical contribution, namely the correlation of grains and grain boundaries with the values of resistance and capacitance, needs investigation. The changes in electrical properties are controlled by the grains, the grain boundaries, and the area between the sample and the contact. The electrical properties of Ba0.5Sr0.5TiO3 were investigated by impedance spectroscopy at room temperature. This method is able to separate the electrical and dielectric properties of the grains, the grain boundaries, and the contact-sample area. ZSimpWin software was used to find the equivalent electrical circuit and the resistance and capacitance values. It was observed that with increasing annealing time, the small-grain resistance, the grain-boundary resistance, and the large-grain capacitance also increase. The resistance values of the small grains and large grains were smaller than the grain-boundary resistance. The capacitance-resistance values obtained for the small grains and large grains tended to be smaller.
Keywords: Ba0.5Sr0.5TiO3, annealing, grain, grain boundary

1 Introduction
The rapid development and advancement of technology are influenced by the development of materials as their basis. The development of materials is inseparable from discoveries of superior properties of a material [1]. Solid materials are conveniently grouped into three basic categories: metals, ceramics, and polymers, a scheme based primarily on chemical makeup and atomic structure. Most materials fall into one distinct grouping or another. In addition, there are composites, which are engineered combinations of two or more different materials. Another category is advanced materials, those used in high-technology applications, such as semiconductors, biomaterials, smart materials, and nanoengineered materials [1].

Barium titanate (BaTiO3) was originally discovered in 1941; this material is ferroelectric [2]. Research continued in line with the discovery of interesting properties of barium titanate (BaTiO3), including the fact that the material is very practical because of its very stable chemical and mechanical properties, and that it has ferroelectric properties [3]. Applications of barium titanate (BaTiO3) include the fields of thermal engineering, electricity, electromechanics, and electro-optics, namely multilayer capacitors (MLCs), PTC thermistors, electro-optical equipment, dynamic random access memories (DRAM), and tunable capacitors for microwave technology [3-5].

Barium strontium titanate, which has the chemical formula BaSrTiO3 and is better known as BST, is one type of material in the ceramic group.
BST is a ferroelectric material belonging to the perovskite type, formed from barium titanate (BaTiO3) doped with strontium (Sr). The addition of strontium to barium titanate is able to change the nature of barium titanate, because the nature of a material can be altered by heat treatment and by the addition of other substances [1]. This research was intended for the application of Ba0.5Sr0.5TiO3 as a PTC thermistor. One of the characteristics of a PTC is that the resistance of the material rises significantly as its temperature increases [6]. The influence of temperature on the material changes the grain size, which causes a shift of the Curie point, the transition point from ferroelectric to paraelectric in Ba0.5Sr0.5TiO3, and a phase transition. This shows that changes in electrical properties and transport mechanisms at room and low temperatures are controlled by grains and grain boundaries. The addition of Sr to BaTiO3 reduces the Curie temperature to room temperature [5]; therefore it is important to examine the electrical conductivity of Ba0.5Sr0.5TiO3 at room temperature. The stimulus of electrical properties such as electrical conductivity and dielectric constant is the electric field [1]. The method used in this study is impedance spectroscopy, a popular analytical method in materials science research and development. This method provides relatively simple electrical measurements whose results can be related to complex material variables: mass transport, chemical reaction rates, corrosion, amorphous and polycrystalline dielectric behavior, microstructure, and the influence of composition on the conductance of solids.
The impedance spectroscopic method can separate the electrical and dielectric properties of the grain, the grain boundary, and the contact-sample area. Measuring impedance parameters helps identify the physical processes and determine the types of electrical parameters that represent the system [7]. It is important to have an equivalent model that can describe the electrical properties. The electrical properties of the material are determined by a series combination of grain and grain boundaries, each represented by a parallel RC element; the material's equivalent electrical circuit is thus a series of two parallel RC elements [8]. Ba0.5Sr0.5TiO3 annealed at 900°C for 1, 2, and 4 hours has better mechanical properties than sintered Ba0.5Sr0.5TiO3 [9]. To obtain a complete understanding of Ba0.5Sr0.5TiO3, it is necessary to examine the electrical properties of material annealed at 900°C for 1, 2, and 4 hours, in particular the contribution of the grain and the grain boundary to the resistance and capacitance values. This research aims to investigate the influences of annealing on the electrical properties of Ba0.5Sr0.5TiO3.

2 Research Methodology

The materials used in this study were Ba0.5Sr0.5TiO3 samples obtained by annealing at 900°C for 1, 2, and 4 hours using a Ney Vulcan 3-550 furnace. The Ba0.5Sr0.5TiO3 samples are disc-shaped, with a diameter of 10 mm, a thickness of 2 mm, and a mass of 0.5 g. Samples that had been washed were given contacts of fiber wire. Sample preparation was done by heating samples that had been given silver glue at a temperature of 120°C for 1 hour using a Memmert 1534 furnace.
The heating serves to quicken the drying of the glue and to bond the contact wire more firmly to the sample. The prepared sample is then ready for measurement of its RLC values with a calibrated RCL meter. Since measurements with the RCL meter were very sensitive, they were done with the sample in a stable state. The measured values are the impedance and phase angle of each sample at frequencies of 50 Hz-1 MHz. The measurement starts from the high frequency of 1 MHz down to the low frequency of 50 Hz, to maintain the stability of the impedance reading. The impedance and phase-angle data are used to determine the real part of the impedance (Zreal) and the imaginary part (Zimaginary). These data are used as input for the ZSimpWin program to obtain the impedance spectrum as a Nyquist plot. The ZSimpWin software is used to obtain the equivalent electrical circuit corresponding to the physical state of the sample: given the input impedance data and a chosen electrical circuit, the software automatically fits the impedance curve. The equivalent circuit depends on the character of the measured sample, and the value of each electrical component characterizes the electrical properties of the sample.

3 Results and Discussions

The variation of Zreal vs. log frequency for samples annealed at 900°C for 1, 2, and 4 hours is presented in Figure 1. The Zreal value of the sample annealed at 900°C for 4 hours increases significantly compared to the samples annealed for 1 hour and 2 hours. Increasing the annealing time increases the Zreal value, which indicates decreasing AC conductivity.

Figure 1. Graph of Zreal vs. log frequency (annealed at 900°C for 1 hour, 2 hours and 4 hours)

Figure 1 also shows that the Zreal value at high frequencies is much smaller than at low frequencies.
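The conversion described in the methodology, from each measured impedance magnitude and phase angle to Zreal and Zimaginary, can be sketched as follows. This is a minimal illustration; the frequency points and readings in it are hypothetical values, not measured data from this study:

```python
import math

def impedance_components(magnitude_ohm, phase_deg):
    """Convert a measured |Z| and phase angle into Zreal and Zimaginary."""
    phase_rad = math.radians(phase_deg)
    z_real = magnitude_ohm * math.cos(phase_rad)
    z_imag = magnitude_ohm * math.sin(phase_rad)
    return z_real, z_imag

# Hypothetical readings (frequency in Hz, |Z| in ohm, phase in degrees),
# recorded from high to low frequency as in the measurement procedure.
readings = [(1.0e6, 1.2e3, -5.0), (1.0e3, 8.5e5, -40.0), (50.0, 2.9e6, -25.0)]
for freq_hz, mag, phase in readings:
    zr, zi = impedance_components(mag, phase)
    print(f"{freq_hz:>10.0f} Hz: Zreal = {zr:.3e} ohm, Zimag = {zi:.3e} ohm")
```

The (Zreal, -Zimaginary) pairs produced this way are what the Nyquist plot displays and what ZSimpWin takes as fitting input.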
This indicates that at high frequencies the grains contribute to electrical conduction, while at low frequencies that role falls to the grain boundaries. Changes in the impedance spectrum occur more often at low frequencies, that is, at the grain boundary, because it is a less stable region.

Figure 2. Comparison of Nyquist plots (annealed at 900°C for 1 hour, 2 hours and 4 hours)

Figure 2 shows a comparison of the Nyquist plots for samples annealed at 900°C for 1 hour, 2 hours and 4 hours. Increasing the annealing time causes the impedance spectrum curve to become larger and higher, indicating changes in the resistance and capacitance values of the sample. The values of the electrical elements can be obtained by modeling the electrical circuit from the Nyquist curve, with fitting performed by the ZSimpWin program. After fitting several candidate circuits, the most suitable circuit for the sample is shown in Figure 3.

Figure 3. Equivalent electrical circuit; R0 = interface resistance; R1 & C1 = small-grain resistance & capacitance; R2 & C2 = grain-boundary resistance & capacitance; R3 & C3 = large-grain resistance & capacitance

Comparisons of the measurement data with the fitted results for samples annealed at 900°C for 1 hour, 2 hours and 4 hours are shown in Figures 4, 5 and 6.

Figure 4. Comparison of measurement data and fitted results for the sample annealed at 900°C for 1 hour

Figure 5. Comparison of measurement data and fitted results for the sample annealed at 900°C for 2 hours

Figure 6. Comparison of measurement data and fitted results for the sample annealed at 900°C for 4 hours

(Figures 4-6 are Nyquist plots of Zreal vs. Zimaginary; the fitted component values annotated on them are summarized in Table 1.)

The resistance and capacitance values obtained from the ZSimpWin fits are summarized in Table 1.

Table 1. Value of resistance and capacitance

Circuit element    1 hour     2 hours    4 hours
R0 (Ω)             19.84      19.27      163
C1 (µF)            1.43       1.19       0.20
R1 (kΩ)            18.3       379.5      411.3
C2 (µF)            0.68       0.68       0.10
R2 (MΩ)            13.36      14.53      38.77
C3 (F)             0.99E-6    1.60E-6    9.07E15
R3 (kΩ)            480.9      14.91      0.1E-3

Table 1 shows that the resistance values of the small grains and large grains were smaller than the grain-boundary resistance. Grains are more stable than grain boundaries, so their resistance values are smaller. It was observed that with increasing annealing time, the small-grain resistance, the grain-boundary resistance, and the large-grain capacitance also increase. The annealing process reduces thermal stress, as indicated by cracks that are found to be smaller as the annealing time increases. Annealing is also able to improve the microstructure, producing smaller, homogeneous grains with no porosity found [9]. The contact area between one grain and another increases, producing larger resistance and capacitance. The capacitance-resistance values of the small grains and large grains tended to be smaller; this is probably due to a grain distribution that was not yet homogeneous in the sample layer [9].

4 Conclusions

The influences of annealing on the electrical properties of Ba0.5Sr0.5TiO3 can be summarized as follows:
a. With increasing annealing time, the small-grain resistance, the grain-boundary resistance, and the large-grain capacitance also increase.
b. The resistance values of the small grains and large grains were smaller than the grain-boundary resistance.
c. The capacitance-resistance values of the small grains and large grains tended to be smaller.

References

[1] W. D. Callister, "Materials Science and Engineering", third edition, John Wiley and Sons, New York, 1994.
[2] W. Heywang and H. Thomann, "Tailoring of piezoelectric ceramics," Annual Review of Materials Science, 14, 27-47, 1984.
[3] H.
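The equivalent circuit of Figure 3, a series resistance R0 plus three parallel RC branches, can be evaluated numerically. The sketch below uses the 1-hour values from Table 1 to show how such a model traces out an impedance spectrum; it is an illustrative evaluation of the circuit model, not the ZSimpWin fitting procedure itself:

```python
import math

def circuit_impedance(freq_hz, r0, branches):
    """Impedance of R0 in series with parallel RC branches: Z = R0 + sum R/(1 + j*w*R*C)."""
    w = 2 * math.pi * freq_hz
    z = complex(r0, 0.0)
    for r, c in branches:
        z += r / (1 + 1j * w * r * c)
    return z

# 1-hour values from Table 1, converted to ohm and farad.
r0 = 19.84
branches = [(18.3e3, 1.43e-6),    # small grain (R1, C1)
            (13.36e6, 0.68e-6),   # grain boundary (R2, C2)
            (480.9e3, 0.99e-6)]   # large grain (R3, C3)

for f in (50.0, 1.0e3, 1.0e6):    # sweep points within the 50 Hz-1 MHz range
    z = circuit_impedance(f, r0, branches)
    print(f"{f:>9.0f} Hz: Zreal = {z.real:.3e} ohm, -Zimag = {-z.imag:.3e} ohm")
```

At the low-frequency limit this model tends to R0 + R1 + R2 + R3, while at high frequency the capacitors short out the branches and Z tends to R0, consistent with the observation that grains dominate conduction at high frequencies and grain boundaries at low frequencies.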
Lin and Wang, "Structure and dielectric properties of perovskite barium titanate (BaTiO3)," San Jose State University, 2002.
[4] H. Y. Tian, W. G. Luo, A. L. Ding, J. Choi, C. Lee, and K. S. No, "Influences of annealing temperature on the optical and structural properties of (Ba,Sr)TiO3 thin films derived from sol-gel technique," Thin Solid Films, 200-205, 2002.
[5] T. Hungría, M. Algueró, A. B. Hungría, and A. Castro, "Dense, fine-grained Ba1-xSrxTiO3 ceramics prepared by the combination of mechanosynthesized nanopowders and spark plasma sintering," Chemistry of Materials, 6205-6212, 2005.
[6] W. Cao, H. H. Cudney, and R. Waser, "Smart materials and structures," Proceedings of the National Academy of Sciences of the United States of America, 8330-8331, July 1999.
[7] S. Sen, R. N. P. Choudhary, and P. Pramanik, "Impedance spectroscopy of Ba1-xSrxSn0.15Ti0.85O3 ceramics," British Ceramic Transactions, 250-256, 2004.
[8] K. Prabakar, S. K. Narayandass, and D. Mangalaraj, "Impedance and electric modulus analysis of Cd0.6Zn0.4Te thin films," Crystal Research and Technology, 37 (10), 1094-1103, 2002.
[9] D. N. Rositawati, "Studi pengaruh annealing terhadap keramik barium strontium titanate," Prosiding Seminar Nasional Sains dan Pendidikan Sains VI, 210-215, July 2011.
International Journal of Applied Sciences and Smart Technologies, Volume 5, Issue 1, Pages 17-26, p-ISSN 2655-8564, e-ISSN 2685-9432
This work is licensed under a Creative Commons Attribution 4.0 International License

RFID Application for Designing a Low-Cost Learning Device for Playing and Learning to Read Early Braille for Blind Children

Bertha Bintari W.1,*
1Department of Mechanics Design Technology, Vocational Faculty, Universitas Sanata Dharma
*Corresponding author: berthabw@usd.ac.id
(Received 15-12-2022; revised 21-12-2022; accepted 24-12-2022)

Abstract

It is very important for children who have low vision or visual impairment to be able to recognize braille letters and words; this ability helps them read and write braille. Braille-recognition learning methods currently use technology as a learning or assistive device. However, assistive devices with this technology are still rare and unaffordable for some schools and students in Indonesia. The design of this braille learning aid applies RFID as a low-cost solution for a braille reading aid. The tool integrates several braille learning methods that teachers already use when teaching braille recognition in schools: the Mangold method, the Fernald method, the flashcard method, and the scramble method. The tool helps children recognize braille letters and arrange them into words, and it confirms directly by voice whether the letters arranged as a word are right or wrong. With this integrated method, children are expected to be able to learn to read braille while playing, and to learn independently.
Keywords: learning aids, learning and playing tools, RFID applications, reading braille, low-cost braille

1 Introduction

Braille learning aids are needed to help children learn to read beginning braille. This reading learning aid [1] is intended for children who are learning to recognize letters and arrange them into words. Besides supporting learning, the tool is an interactive play facility that is expected to motivate children to learn to read in a fun way. A fun way to learn is to use games or playing media [2]. The learning-while-playing method is proven to improve children's ability to learn to read braille; the scramble method is one example used for this process. The learning method using this tool is conceptualized with a random-letter arrangement method known as the scramble method. Research by Adhitya [3] concludes that the scramble method can improve abilities, namely increasing the scores obtained by students until they reach a predetermined success indicator of 70. However, this method can only succeed with children who are already able to read braille. Braille letters are pasted on letter cards; this letter-card method is referred to as the flashcard method. Apsari [4] states that braille flashcard media can support children's memory in understanding letters and can improve children's tactile sense. The designed braille card still requires the child to feel it with their fingers to recognize the letters. The touching method used is the Mangold method, fingering with both hands [5]; in addition, according to Khairani [6], the fingering method using two hands can improve the ability to recognize and read braille letters [7].
Braille

The Indonesian braille system, in its section on the Indonesian language, states that braille letters are arranged based on combinations of a six-dot pattern, with rules for the distance between dots and the height of the raised braille code.

Figure 1. Braille letters and standard sizes

Braille letters have standard rules for the distance between dots and the height of the embossed letters for braille printed using a braille typewriter [8].

Learning Method Application

The scramble method [3] is a game in which vocabulary is formed from available, scrambled letters, with alternative answers provided to find; it can increase students' concentration and speed of thinking. Ritawati (1996: 51), in [9], states that there are five steps in initial reading: recognizing sentence elements, recognizing word elements, recognizing letter elements, arranging letters into syllables, and assembling syllables into words. Early reading teaching emphasizes the development of basic reading skills. In reading braille, the initial steps are mastery of direction, tactile sensitivity, letter-identification techniques, and line-tracing ability. These abilities cannot be mastered easily [6]; learning media are therefore needed to help teachers build students' enthusiasm for recognizing the letters and words required to provoke a good reading response by increasing vocabulary. Learning with interactive methods makes students more interested in learning to read [10]. The Fernald method uses reading material from the words spoken by the child, with each word taught in its entirety; it relies on reading braille and speaking aloud.
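The six-dot braille cell described above can be represented as a set of raised dot positions (dots 1-3 in the left column, top to bottom, and 4-6 in the right column). The sketch below encodes the letters a-j of the standard braille alphabet to illustrate this combinatorial structure; it is an illustration of the encoding, not part of the device's firmware:

```python
# Standard braille cells for the letters a-j, written as sets of raised dot
# positions (1-3 left column top-to-bottom, 4-6 right column top-to-bottom).
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}

def cell_art(dots):
    """Render a braille cell as three rows of raised (o) / flat (.) dots."""
    rows = [(1, 4), (2, 5), (3, 6)]
    return "\n".join("".join("o" if d in dots else "." for d in row) for row in rows)

# Letter "d" raises dots 1, 4 and 5: both top dots and the middle-right dot.
print(cell_art(BRAILLE_DOTS["d"]))
```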
When students say the word, they will remember words similar to the words they have learned [11]. The Mangold method is a way of reading braille by touching it with both hands. Touching with both hands reduces errors in crossing braille letters that are close together, reducing rubbing and back-and-forth movement [5]. Reading with the Mangold method is an effective method for learning to begin reading braille [12]. The flashcard method refers to the use of cards carrying pictures and words as a medium for learning words; the child then memorizes based on the pictures and words on the flashcard [4], [6]. In designing a braille reading aid, attention must be paid to the electrical, mechanical, and control aspects. These three are elements of robotics, so the rules for designing tools for children have special guidelines on electrical safety, the use of power sources, and the wiring network [13]. The reading aids needed by blind students should offer easy access and be portable, which includes battery life, light weight, and a slim profile. In addition, the tool should be durable, technically reliable, sturdy, easy to use, an effective learning tool, attractive, usable for various functions, and able to give direct feedback in the form of audio [14], [15].

2 RFID Application Methods in Designing Braille Reading Learning Aids

Existing braille reading learning aids are still unaffordable because of the high technology used in them. Assistive technology is a means that can be used to improve the learning experience of visually impaired children.
Assistive technology refers to items and products, including modified ones, that provide accommodation for children with special needs, including visual impairment, according to Bryant (2012) in Handoyo [16]. Therefore, this design applies RFID technology, which is relatively affordable (low cost), as an assistive technology for the blind. An RFID reader is used as the braille card reader; each card carries an RFID tag whose data has been entered into the microcontroller program library.

Figure 2. Braille reading aid schematic

The RFID tag used is a MIFARE-type card used as a flashcard, with a braille symbol attached to the back of the casing. MIFARE cards and their readers use a frequency of 13.56 MHz; this card is suitable for the braille card because the system is capable of recording data [17].

Circuit System

The card case is designed to cover the RFID card with braille, and the tool case is designed to protect the equipment circuit and controls. The RFID cards are treated as letter cards that are scrambled, touched, and then arranged into a word; the arrangement of RFID cards is then read by the RFID reader. The microcontroller used is an Arduino Uno R3. The circuit uses the Arduino Uno R3 as microcontroller, a MIFARE RFID reader to read the RFID tag cards, an I2C LCD to display the words composed of braille letters, an MP3 module containing voice recordings to confirm the arrangement of letters, and a speaker to amplify the sound from the MP3 module. Programming uses the Arduino IDE software.

Figure 3. The main electrical circuit of the prototype

The RFID tag reader is driven by a DC motor controlled, under program settings, by the RFID input that is read.
The tool is powered by two rechargeable 18650 lithium batteries.

Prototype Design Concept

The design concept integrates the Fernald method, the Mangold method, and the flashcard method with a system of playing and learning to read braille using the scramble method. In the Fernald method as applied here, the teacher writes a word in braille and the students feel it to read it; the students then say the word they read. Word confirmation is given directly when students say the words they read: word recognition by voice, and confirmation of right or wrong by voice. The Mangold method here uses tactile reading of the braille letters found on the braille cards. The application of an RFID card as a braille flashcard, equipped with a casing carrying a braille symbol, is the flashcard method. This flashcard method does not use one card per word but one card per braille letter. The cards are then shuffled and the students arrange them into a word, as a way of playing at braille reading; this is the scramble method. RFID is applied in the use of RFID cards and an RFID reader. Cards with braille letters are arranged on a card tray with 6 places. In this prototype the arrangement of letters is limited to 5 letters; the 6th place holds the stop card, a marker telling the RFID reader to finish reading the cards. The RFID reader provides input to the microcontroller, which confirms whether the word read is right or wrong; the confirmation is voiced through the MP3 module activated by the microcontroller. Safety and functional checks were carried out through several circuit and program trials. Once the circuit worked, cases were designed both for the braille cards and for the whole device.
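The confirmation logic described above (read up to five letter cards, stop at the stop card, then check the assembled word) can be sketched as follows. This is a Python illustration of the decision logic only, not the actual Arduino firmware; the tag UIDs, the vocabulary, and the function names are hypothetical:

```python
# Hypothetical mapping from RFID tag UIDs to braille letters, plus a stop card.
TAG_TO_LETTER = {"04A1": "b", "04A2": "u", "04A3": "k", "04A4": "u", "STOP": None}
VOCABULARY = {"buku", "bola", "ibu"}    # words the MP3 module could confirm
MAX_LETTERS = 5                         # the prototype limits words to 5 letters

def read_word(scanned_uids):
    """Collect letters from scanned cards until the stop card or the 5-letter limit."""
    letters = []
    for uid in scanned_uids:
        letter = TAG_TO_LETTER.get(uid)
        if letter is None:              # stop card (or unknown card): finish reading
            break
        letters.append(letter)
        if len(letters) == MAX_LETTERS:
            break
    return "".join(letters)

def confirm(word):
    """Return the confirmation the MP3 module would voice."""
    return "correct" if word in VOCABULARY else "wrong"

word = read_word(["04A1", "04A2", "04A3", "04A4", "STOP"])
print(word, "->", confirm(word))   # buku -> correct
```

In the real device the same decision would be taken on the Arduino after the reader scans each card slot from left to right, with the result triggering the appropriate MP3 recording.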
The RFID reader is driven by a control on the motor driver, which is located separately from the casing and protected by the motor casing. Protruding cables are wrapped in a safety cover. The push button is placed closest to the front of the tool and has a diameter of 1.5 cm so that it is easily felt by a child's hand.

Braille Reading Aid

As a beginning braille reading aid, there are 6 card slots. The card slots for letter-card placement are limited to 5 letters, and the last slot takes the stop card that commands the MP3 module to confirm by voice.

3 Results and Analysis

The reading function of the RFID reader already works properly; cards can still be read at a distance of 15 cm. The use of acrylic as the prototype material is not yet optimal because the device still feels heavy and the cut edges are still a bit sharp; acrylic is also not very flexible.

Figure 4. Drawing of the braille reading aid design

Figure 5. The braille RFID cards for the braille reading learning aid

The new prototype voices the true/false confirmation only; it does not yet voice the reading letter by letter. In the next prototype improvement, voice confirmation letter by letter is planned. The card design will be modified so that the companion or teacher can also see the card directly, even though it is already shown on the LCD screen. The trials were conducted internally, to test the functionality of the tool.

Figure 6. The RFID braille learning reading aid prototype

4 Conclusion

This braille reading aid prototype is functionally able to support children learning to understand braille.
The RFID card, equipped with embossed braille, is easy to feel and carries a marker so that the card is not placed upside down. The RFID reader runs smoothly in the reading direction from left to right at a program-controlled speed. Confirmation of letters and words by voice will add to the value of this tool. With the program improved, the tool is ready to be tested on students who have low vision or are blind.

References

[1] I. P. A. Padma Diana, I. G. A. Putu Raka Agung, and P. Rahardjo, Perancangan modul pembelajaran huruf braille berbasis mikrokontroler untuk membantu proses belajar disabilitas netra, J. Spektrum, 5 (1) (2018) 5.
[2] C. N. Aulina, Pengaruh permainan dan penguasaan kosakata terhadap kemampuan membaca permulaan anak usia 5-6 tahun, Pedagog. J. Pendidik., 1 (2) (2012) 131.
[3] G. Adhitya, Peningkatan kemampuan membaca permulaan huruf braille melalui metode scramble pada siswa tunanetra kelas I di SLB A YPTN Mataram, Widia Ortodidaktika, 6 (2) (2017) 139-148.
[4] Adinda Apsari Anindita, Pembelajaran braille bermedia flashcard di TKLB tunanetra, (2020) 1-8.
[5] T. Maryatun, Pengelolaan pembelajaran membaca permulaan tulisan braille melalui sistem Mangold pada siswa tunanetra, Manajer Pendidik., 10 (5) (2016) 502-506.
[6] M. Khairani, Media flashcard braille terhadap kemampuan membaca permulaan anak tunanetra, J. Pendidik. Khusus UNESA, (2016) 1-5 [online]. Available: https://jurnalmahasiswa.unesa.ac.id/index.php/jurnal-pendidikankhusus/article/download/17862/16152
[7] Jumaidi, Atmazaki, and H. E. Thahar, Peningkatan kecepatan efektif membaca tulisan braille dengan teknik dua tangan bagi tunanetra kelas V SLB Negeri 2 Padang, Bahasa, Sastra dan Pembelajaran, 1 (3) (2013) 60-70, [online]. Available: http://ejournal.unp.ac.id/index.php/bsp/article/view/5016/3968
[8] L. J.
Lieberman, Teaching students with visual impairments, case study, Adapt. Phys. Educ. Empower. Crit. Think., April (2019) 140-142.
[9] S. Anna, M. Angelina, and A. Budiman, Penggunaan metode scramble dalam meningkatkan kemampuan membaca braille bagi siswa tunanetra kelas III di SLBN Weri Larantuka.
[10] D. Nusyirwan et al., TEPIKAN (tebak pilihan ikan) menggunakan card tag RFID berbasis Arduino Uno sebagai media belajar anak sekolah, Simetris J. Tek. Mesin, Elektro dan Ilmu Komput., 10 (2) (2019) 589-602.
[11] Sudartiningtyas, Penggunaan metode Fernald untuk meningkatkan prestasi membaca braille bagi siswa tunanetra kelas II di SLB-A TPA Jember semester II tahun ajaran 2016/2017, SPEED J. J. Spec. Educ., 4 (1) (2020) 12-16.
[12] Lusiana Kilen dan Ehan, Teknik Mangold untuk meningkatkan kemampuan membaca permulaan braille pada peserta didik tunanetra, 19 (1) (2018) 25-31, [online]. Available: https://www.mendeley.com/catalogue/cf23404b-8e35-3086-8f64-68890c7c70ab/?utm_source=desktop&utm_medium=1.19.8&utm_campaign=open_catalog&userdocumentid=%7b417546a2-31d9-4e4e-af0a-5c0680c8574c%7d
[13] S. Chiasson and C. Gutwin, Design principles for children's technology, (2005).
[14] E. R. Hoskin, Development and design of BrailleBunny: a device for braille literacy education, Queen's Univ. ProQuest Diss. Publ. (2019) 28389350.
[15] S. G. Mouroutsos and E. Mitka, A guide to safety standards of toy-robots, 2012 IEEE/RSJ Int. Conf. Intell. Robot. Syst., May (2012), [online]. Available: http://www.researchgate.net/publication/236031868_a_guide_to_safety_standards_of_toy-robots
[16] R. R. Handoyo, Pengembangan bahan ajar kode braille berbasis teori elaborasi bagi guru pendidikan khusus, JPK (Jurnal Pendidik. Khusus), 14 (2) (2019) 46-56.
[17] H. Djamal, Radio frequency identification (RFID) dan aplikasinya, Tesla J. Tek.
Elektro, 16 (1) (2014) 45-55.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 2, Pages 185-194, p-ISSN 2655-8564, e-ISSN 2685-9432
This work is licensed under a Creative Commons Attribution 4.0 International License

The Experiment of Wind Electric Water Pumping for Salt Farmers in a Remote Area of Demak, Indonesia

S. Dio Zevalukito1, YB. Lukiyanto1,* and F. Risky Prayogo1
1Department of Mechanical Engineering, Sanata Dharma University, Yogyakarta, Republic of Indonesia
*Corresponding author: lukiyanto@usd.ac.id
(Received 09-11-2022; revised 14-11-2022; accepted 29-11-2022)

Abstract

Local villagers in the remote area of Wedung, Demak, on the northern coast of Central Java, Indonesia, use special equipment called a wind-pump for lifting and circulating seawater in salt production. The salt farmer has the skill to produce, manufacture, and maintain his own traditional wind-pump. The wind-pump unit consists of a four-blade horizontal-axis windmill and a reciprocating pump. This experimental study separated the windmill and the pump by 50 meters; the pump was a low-speed centrifugal pump. The windmill shaft was connected electrically to the pump shaft; the electric transmission components were an AC generator, a diode circuit, and a DC motor. The experiment was carried out in a place that has the same characteristics as the original site, in the southern region of Bantul, Yogyakarta, Indonesia. The experiment varied the pump head between 55 cm and 85 cm, with 6 hours of operation for each variation. The average wind speeds during data collection at heads of 55 and 85 cm were 3.9 and 3.8 m/s. The volume flow rates and the volumes produced by the pump during 6 hours of operation were 0.134 and 0.215 liter/s and 2901.6 and 4640.6 liters, respectively.
Keywords: wind-pump, salt production processes, four blades, electric transmission

1 Introduction

Salt production in Indonesia still uses a traditional technique that applies solar heat and solar radiation to evaporate seawater [1]. The salt production process usually takes place in the dry season; this process is the easiest and cheapest method. Typically, salt ponds are spread along the coastline, and they are plentiful on the north coast of Java, Indonesia. Wind energy for water pumping is used worldwide, especially in developing countries [3]. The application of renewable energy for water pumping persists: traditional methods use the windpump, while modern methods use photovoltaic and wind-turbine technologies for water pumping [4]. Especially in remote areas, salt farmers do not have electricity or fuel to power pumps [5]. The salt farmers respond to this by building windmills [2,5], which assist in salt production. The salt farmers of Demak use the windmill for water pumping [2,5], and commonly use a reciprocating pump as the instrument for transporting water [2]. This traditional device is usually called a windpump. Windpumps are built for two purposes: the first type transports water from the sea to the salt ponds, and the second transports water from one pond to another [2]. The purpose of this study is to discuss the Demak salt farmers' windmill for water pumping with a slow-speed centrifugal pump. Two net heads were tested, 55.00 cm and 85.00 cm; the difference in net head could offer further assistance to salt farmers.
the results are presented as windspeed versus volume flow rate, efficiency, and volume produced in a day. the study thus investigates the performance of the four-blade salt farmer's windmill from demak-indonesia for water pumping coupled with a centrifugal pump, with two variations of the water level (55.00 cm and 85.00 cm).

velasco et al. [6] explained the theory of wind-electric water pumping. zevalukito and lukiyanto [7] investigated the salt farmer's windmill from demak for water pumping with a low-speed centrifugal pump, where the transmission utilizes an electric transmission with an electric generator and an electric motor. iswanjono et al. [8] investigated the performance of such transmissions with several types of generator, motor, and cable. lukiyanto and wahisbullah [9] examined the double u pipe configuration for a centrifugal pump; the double u pipe arrangement aims to increase efficiency because it has lower losses than a sliding orifice.

site and data description. the data collection process was completed at kuwaru beach (bantul, special region of yogyakarta, indonesia). the windpump system was installed about 50 m from the coastline: the windmill was fitted on the beach and the centrifugal pump was arranged 50 m away from the windmill. there is no large difference in the wind speed range between kuwaru beach and demak's coastline. the site has uncontrollable wind speed because of the natural conditions of the coastline. data collection ran from morning until afternoon, local time (gmt+07.00). the wind speed ranged between 2.2 and 3.9 m/s. the information required was the wind speed and the windmill's shaft speed at the windmill, and the volume flow rate of the centrifugal pump. the data were recorded every 6 minutes.

2 research methodology
fig. 1
illustrates the wind pumping system. the windmill's shaft is attached to the ac generator's shaft and thereby produces ac electricity. the ac electricity is transformed into dc electricity by a diode circuit (a full-wave bridge rectifier). the dc electricity feeds the dc electric motor, whose shaft is coupled with a low-speed centrifugal pump. when the low-speed centrifugal pump rotates, the water is driven by centrifugal force, which pushes the water upward.

figure 1. schematics of the wind pumping system

the wind pumping system in this study has three principal parts, shown in fig. 1: a windmill [5], a transmission system [8], and a centrifugal pump [9]. in this study, the windpump system used an ac generator and a dc electric motor as the electric transmission. fig. 2 shows the energy losses during energy conversion.

figure 2. energy losses of the system

windmill (system a). the windmill used in this study was purchased directly from one of the salt farmers from demak-indonesia. the windmill is arranged with four blades and may be rearranged with two blades [5]. the windmill is 200.00 cm in diameter, with blades 22.25 cm wide and 2.50 cm thick. the windmill is attached to a shaft 2.00 cm in diameter and 25.00 cm in length. the shaft is fitted horizontally between two ucf 204 bearings. a wooden tower was used to install the windmill. usually, the wooden tower is installed directly above the pond's fence; here, the wooden tower is attached to a rigid structure for research purposes. the performance of the windmill was examined by zevalukito et al. [5].
the windmill has a coefficient of performance (cp) of 10.2% and a maximum torque (t) of 1.90 nm; that is, at most 10.2% of the wind energy captured by this windmill can be converted to mechanical shaft power. the mechanical shaft power rotates the ac generator, whose shaft is coupled to the windmill's shaft.

transmission system (system b). the electric transmission requires some electrical equipment. the transmission system consists, in sequence, of an alternating current generator, a diode circuit, a wire, and a direct current electric motor. the windmill's shaft was coupled to the generator's shaft. the electric generator was a 500 watt alternating current brushless permanent magnet generator. the electricity produced by the generator was alternating current (ac). the ac current was transformed to direct current (dc) by the diode circuit, a full-wave bridge rectifier [7-8]. the dc electricity is transferred by a wire 50 m long, connected to the dc electric motor. the dc electric motor is a 450 watt permanent magnet motor. the dc electric motor's shaft is attached to the low-speed centrifugal pump tube and thereby rotates the pump.

low-speed centrifugal pump (system c). the centrifugal pump in this study was a low-speed centrifugal pump coupled with the dc electric motor's shaft. the centrifugal pump uses a double u configuration as an impeller [6,9]. the suction pipe, two long arms on two sides, and the double u pipe configuration are the three main parts of the centrifugal pump [9]. the centrifugal pump is 110.00 cm in diameter. the available heights between the suction part and the double u pipe are 55.00 cm, 85.00 cm, and 115.00 cm; the suction part can be replaced for each requirement. the centrifugal pump is mounted below the rigid structure. a protective cover is installed around the centrifugal pump to prevent water from splashing everywhere.
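the staged losses sketched in fig. 2 can be illustrated with a small back-of-the-envelope calculation: assuming the stage efficiencies multiply, the windmill's cp of 10.2% together with the overall wind-to-hydraulic efficiencies reported in section 3 implies a combined efficiency for the transmission and pump stages. this is a hedged sketch of the loss chain, not a calculation performed in the paper.

```python
# hypothetical decomposition of the fig. 2 loss chain: assuming the stage
# efficiencies multiply, the combined generator/rectifier/wire/motor/pump
# efficiency follows from the measured overall efficiency divided by the
# windmill's coefficient of performance (cp = 10.2%, from [5]).
CP_WINDMILL = 0.102  # windmill coefficient of performance (system a)

def downstream_efficiency(overall_efficiency: float) -> float:
    """infer the combined efficiency of systems b and c (transmission
    and pump) from the overall wind-to-hydraulic efficiency."""
    return overall_efficiency / CP_WINDMILL

# reported average overall efficiencies for the two net heads (section 3)
for head_cm, eta in ((55.0, 0.0132), (85.0, 0.0371)):
    print(f"head {head_cm:.0f} cm: downstream efficiency "
          f"{downstream_efficiency(eta) * 100:.1f}%")
```

under this multiplicative assumption, most of the remaining loss would occur in the generator-to-pump chain rather than in the rotor itself.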
the system efficiency of this wind-electric water pumping is assessed from wind power (system a) to hydraulic power (system c). the energy conversion of the wind electric water pumping begins with wind energy [11]:

p_wind = 0.5 × ρ_air × a × v³    (1)

where p_wind is the input power; ρ_air is constant at 1.225 kg/m³; a is the swept area of the windmill, constant at 3.14 m²; and v is the wind speed. the windmill system is shown in fig. 2 as system a. the energy conversion ends in the low-speed centrifugal pump when the pump starts pumping: the hydraulic power is driven by the wind speed, which this windpump system converts into a volume flow rate. the hydraulic power can be written as [12]:

p_hydraulic = ρ_water × g × h × q    (2)

where p_hydraulic is the output power; ρ_water is constant at 1.03 kg/litre; g is the acceleration of gravity, 9.81 m/s²; h is the net head of the pump (55.00 cm, 85.00 cm or 115.00 cm); and q is the volume flow rate. the system efficiency is assessed as the ratio of output power (2) to input power (1):

η = p_hydraulic / p_wind × 100%    (3)

since this windpump system is intended to support the developing salt farmer community in demak-indonesia, the volume produced in a day is certainly required, as the salt farmers gain direct advantages from the windmills. the wind speed and volume flow rate, recorded every six minutes, are assumed to remain constant within each interval.

3 results and discussions
fig. 3 shows the relation between the windspeed captured and the volume flow rate produced by the wind pumping system. the wind speed available during the data collection process ranged between 2.20 m/s and 3.90 m/s. the highest flow rate was produced by the windpump with 85.00 cm of net head, followed by the windpump with 55.00 cm of net head.
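a minimal numerical sketch of equations (1)-(3), with efficiency taken as hydraulic output power over wind input power; the operating point used below is an illustrative value within the measured range, not a specific data point from the experiment.

```python
# sketch of equations (1)-(3): wind power, hydraulic power and efficiency.
RHO_AIR = 1.225      # air density (kg/m^3)
SWEPT_AREA = 3.14    # windmill swept area (m^2) = pi * (2.00 m)^2 / 4
RHO_WATER = 1030.0   # brine density (kg/m^3), i.e. 1.03 kg/litre
G = 9.81             # gravitational acceleration (m/s^2)

def wind_power(v: float) -> float:
    """equation (1): p_wind = 0.5 * rho_air * a * v^3, in watts."""
    return 0.5 * RHO_AIR * SWEPT_AREA * v ** 3

def hydraulic_power(head_m: float, q_litre_per_s: float) -> float:
    """equation (2): p_hydraulic = rho_water * g * h * q, in watts."""
    return RHO_WATER * G * head_m * (q_litre_per_s / 1000.0)

def efficiency(v: float, head_m: float, q_litre_per_s: float) -> float:
    """equation (3): eta = p_hydraulic / p_wind * 100%."""
    return hydraulic_power(head_m, q_litre_per_s) / wind_power(v) * 100.0

# illustrative point: 3.8 m/s wind, 85 cm head, 0.215 litre/s flow rate
print(round(efficiency(3.8, 0.85, 0.215), 2))  # about 1.75 (percent)
```

note that an instantaneous point efficiency like this can differ from the averaged efficiencies reported later, since wind speed and flow rate vary over the run.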
the higher the wind energy captured by the wind pumping system, the larger the volume produced by the pump. the average wind speed captured for the 55.00 cm and 85.00 cm variations is 3.10 m/s and 3.30 m/s, respectively. the wind pumping system with 85.00 cm of net head has the greatest average wind speed and the 55.00 cm variation the lowest, so the system with 85.00 cm of net head produces more energy than that with 55.00 cm. the wind pumping system could drive the centrifugal pump once the windmill captured a wind speed of 2.5 m/s and above.

figure 3. relation of windspeed and volume flow rate of the wind pumping system (volume flow rate in l/s versus wind speed in m/s, for the 55 cm and 85 cm heads)

fig. 4 shows the volume produced in a day by the wind pumping system. the volume produced every 6 minutes is assumed to remain constant. the wind pumping system with 85.00 cm of net head has the highest volume produced, 4640.58 liters, while the 55.00 cm net head produced 2901.60 liters. as recorded in fig. 3, the wind pumping system with 85.00 cm of net head had the greatest average wind speed, 3.30 m/s, and produced 4640.58 liters of water, whereas the 55.00 cm variation, with the lowest average wind speed of 3.10 m/s, produced 2901.60 liters of water. the wind energy captured by the wind pumping system with 85.00 cm of net head is more than that with 55.00 cm; consequently, the centrifugal pump with 85.00 cm of net head produces more water than with 55.00 cm of net head.

figure 4. volume produced in a day

there remains a substantial energy loss in the system, as shown in fig. 2.
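the daily volume figures follow from summing the 6-minute samples, each assumed constant over its 360-second interval. the sample series below is hypothetical, used only to show the arithmetic:

```python
# integrate 6-minute volume flow rate samples into a total volume, with
# each sample assumed constant over its interval (as stated in the text).
SAMPLE_INTERVAL_S = 6 * 60  # readings recorded every 6 minutes

def total_volume(flow_rates_lps: list) -> float:
    """sum volume flow rate samples (litre/s) into litres."""
    return sum(q * SAMPLE_INTERVAL_S for q in flow_rates_lps)

# e.g. a hypothetical constant 0.215 litre/s over 6 hours (60 samples)
print(round(total_volume([0.215] * 60), 1))  # 4644.0 litres
```

a constant-rate series at the 85 cm average flow rate lands close to the reported 4640.58 litres, which is consistent with the piecewise-constant assumption.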
a lack of measuring equipment became a constraint during the data collection process, so the system efficiency for each net head could only be calculated as an average. the average efficiency of the wind pumping system with 55.00 cm and 85.00 cm of net head is 1.32% and 3.71%, respectively.

4 conclusions
the experimental study of a wind pumping system with two variations of the net head was completed. the wind pumping system with 55.00 cm and 85.00 cm of net head produced maximum volume flow rates of 0.24 liter/s at a wind speed of 3.9 m/s and 0.28 liter/s at 3.8 m/s, respectively. the volumes produced in a day are 2901.60 liters and 4640.58 liters, respectively. the average efficiency of the wind pumping system for 85.00 cm and 55.00 cm of net head is 3.71% and 1.32%, respectively.

references
[1] a. rositawati, c. taslim, and d. soetrisnanto, "rekristalisasi garam rakyat dari daerah demak untuk mencapai sni garam industri," j. teknol. kim. dan ind., 2(4), 217–225, 2013.
[2] s. d. c. deo, s. d. zevalukito, and y. lukiyanto, "indonesian traditional windmill of demak, central java for water pumping in traditional salt production," j. phys. conf. ser., 1511, 012117, 2020.
[3] l. mangialardi and g. mantriota, "continuously variable transmissions with torque-sensing regulators in waterpumping windmills," renew. energy, 4(7), 807–823, 1994.
[4] p. e. campana, h. li, and j. yan, "techno-economic feasibility of the irrigation system for the grassland and farmland conservation in china: photovoltaic vs. wind power water pumping," energy convers. manag., 103, 311–320, 2015.
[5] s. d. zevalukito, y. b. lukiyanto, and d. p.
utomo, "two and four blades windmill characteristics of traditional salt farmers from demak region," in icimece 2019, 2020, 1–5.
[6] m. velasco, o. probst, and s. acevedo, "theory of wind-electric water pumping," renew. energy, 29(6), 873–893, 2004.
[7] s. d. zevalukito and y. b. lukiyanto, "the performance of salt farmer windmill from demak for water pumping with a low-speed centrifugal pump," j. phys. conf. ser., 1724, 1–8, 2021.
[8] iswanjono, y. b. lukiyanto, b. setyahandana, and rines, "a couple of generator and motor as electric transmission system of a driving shaft to long distance driven shaft," e3s web conf., 67, 1–4, 2018.
[9] y. b. lukiyanto and e. wahisbullah, "a simple double u pipe configuration to improve performance of a large-diameter slow-speed centrifugal impeller," in proceedings of the 3rd applied science for technology application, astechnova 2014, 2014.
[10] b. setyahandana, y. b. lukiyanto, and rines, "pipes outlet directions and diameter of double u pipes configuration on centrifugal reaction pump," proceeding 15th int. conf. qir (quality res.), 324–332, 2017.
[11] t. m. letcher, "wind energy engineering: a handbook for onshore and offshore wind turbines," chennai: joe hayton, 2017.
[12] y. a. cengel and j. m. cimbala, "fluid mechanics fundamentals and application, 1st ed," new york: mcgraw-hill, 2006.
international journal of applied sciences and smart technologies, volume 5, issue 1, pages 101-112, p-issn 2655-8564, e-issn 2685-9432. this work is licensed under a creative commons attribution 4.0 international license.

a brief on optical-based investigation towards the interfacial behaviors during high viscous liquid/gas countercurrent two-phase flow in a complex conduit representing 1/30 downscaled of pwr hot leg geometry

achilleus hermawan astyanto1,*, indarto2, deendarlianto2
1department of mechanical engineering, universitas sanata dharma, kampus iii usd maguwoharjo, yogyakarta 55282, indonesia
2department of mechanical and industrial engineering, universitas gadjah mada, jalan grafika no 2 kampus ugm, yogyakarta 55281, indonesia
*corresponding author: achil.herma@usd.ac.id
(received 28-04-2023; revised 04-05-2023; accepted 04-05-2023)

abstract
the present work briefly investigates liquid/gas countercurrent two-phase flow phenomena, which are specifically found during mitigation of an accident scenario in the operation of a nuclear reactor. comprehensive knowledge of the corresponding phenomena is obviously important to avoid failure of the cooling mechanism. here, a fluid pair consisting of a high-viscosity liquid and a gas flows through a complex conduit representing a 1/30 scaled-down version of a pwr hot leg's typical geometry. the flow structures were visually observed, while the film thicknesses were extracted by an image processing algorithm from the corresponding optical data. the obtained results reveal that a rather sharp decrease in liquid film thickness corresponds to the flow regime transition.

keywords: countercurrent two-phase flow, liquid film thickness, high viscous liquid, complex conduit

1 introduction
a gas/liquid countercurrent two-phase flow can be found during mitigation of an accident in the operation of nuclear power plants.
comprehensive knowledge of the corresponding phenomena is important to avoid failure of the cooling mechanism during the scenario of a loss of coolant accident due to a small break (sb-loca) in the primary circuit of pressurized water reactors (pwr). furthermore, a database built on analytical, mathematical, numerical and experimental investigations is able to elaborate the comprehensive behaviors of the corresponding flow phenomena [1].

various studies have been carried out to investigate the characteristics of the flow structures during countercurrent two-phase flow. visual observations are widely reported, on the basis of either large experimental works [2–5] or computational fluid dynamics [6]. subsequently, measurement methods have also been developed to support the corresponding studies: studies on pressure fluctuations were reported by several researchers [7, 8], and measurement methods have been developed based on both physical measurements and optical data. the concepts of conductance, capacitance and also impedance were largely reported as particular techniques for measurements based on electronic conceptions [9–13]. however, the characteristics of the flow regimes can be further obtained from measurements of both the pressures and, mainly, the interfacial behaviors, since these strongly correspond to the fraction of either the gaseous or the liquid phase, namely the void fraction or the liquid holdup, respectively.
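as a purely geometric illustration of how void fraction and liquid holdup complement each other, the following sketch computes both for an idealized stratified flow in a circular pipe from the liquid film height. this is an assumption-laden sketch of the underlying geometry, not the measurement technique of the cited studies.

```python
import math

# idealized stratified flow in a circular pipe of diameter d: the liquid
# holdup is the circular-segment area occupied by liquid divided by the
# pipe cross-section, and the void fraction is its complement
# (holdup + void fraction = 1).
def liquid_holdup(film_height: float, diameter: float) -> float:
    """fraction of the cross-section occupied by liquid (0..1)."""
    r = diameter / 2.0
    theta = 2.0 * math.acos(1.0 - film_height / r)  # segment angle (rad)
    segment_area = r * r * (theta - math.sin(theta)) / 2.0
    return segment_area / (math.pi * r * r)

def void_fraction(film_height: float, diameter: float) -> float:
    """fraction of the cross-section occupied by gas (0..1)."""
    return 1.0 - liquid_holdup(film_height, diameter)

# a half-full pipe splits the cross-section evenly between the phases
print(round(liquid_holdup(0.5, 1.0), 3), round(void_fraction(0.5, 1.0), 3))
```

this relation is why a measured film thickness (or holdup) carries the same regime information as the void fraction in a stratified configuration.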
on the other hand, measurements based on optical data have been introduced, and image processing techniques have been widely reported to support object detection algorithms [14]. therefore, since the interface between the phases during two-phase flow can be defined as a particular object, namely an edge, the method may be used to support the description of flooding, which is initiated when the stability of the countercurrent flow can no longer be maintained. furthermore, the corresponding technique enables obtaining statistical characteristics of the countercurrent flow structures, to be further elaborated as a regime identification method [15]. here, various studies were conducted in both straight and complex geometries representing a pwr hot leg's typical geometry. moreover, the corresponding measurement methods were then applied to investigate the effects of both geometry and fluid properties on the characteristics of countercurrent flow.

from the aforementioned literature survey, it is strongly implied that the interfacial behaviors captured during visualization may establish practical advantages for the investigation of countercurrent flow. the further challenge of developing measurement techniques utilizing optical data provides an opportunity to avoid the intrusive nature of conventional physical measurements, in which probes must be in contact with the working fluids. therefore, the present work elaborates the investigation of countercurrent flow phenomena on the basis of visual-optical data, through the characteristics of the liquid film thickness representing the interfacial behaviors.

2 methods
fig. 1 schematically depicts the experimental apparatus.
the main components of the constructed facility, which represents a small scale of a primary circuit of a pwr, comprise simulators of a hot leg, a reactor pressure vessel (rpv) and a steam generator (sg). for the supply systems of the working fluids, a centrifugal water pump and a reciprocating air compressor are utilized to circulate the liquid and gas, respectively. table 1 provides the physical properties of the tested fluids. in order to increase the dynamic viscosity of the liquid, distilled water is mixed with glycerol at 40% volume to total volume of the solution. on the other hand, to support the visual observations, a high speed video camera was utilized at a recording rate of 240 frames per second.

table 1. physical properties of the tested fluids
property | gas | liquid
density (kg/m³) | 1.15 | 1058.13
dynamic viscosity (kg/m·s) | 1.87×10⁻⁵ | 31.67×10⁻⁴
surface tension (n/m) | – | 0.064

figure 1. the scheme of the experimental facility

in the present work, the flow parameters, which comprise the liquid and gas flow rates, were varied. for a constant flow rate of the liquid, the gas flow rate was increased stepwise in increments of 5 liters per minute (lpm). on the other hand, when the gas flow rate was kept constant, the liquid flow rate was increased stepwise in increments of 2 gallons per hour (gph). in addition, a rather detailed description of both the experimental apparatus and the procedures can be found in astyanto et al. [13].

figure 2. a typical set of visualization at the flow condition of ql = 24 gph with respect to the change of gas flow rate (qg = 10, 20, 30 lpm)

3 results and discussions
fig.
2 shows a typical visualization of the countercurrent flow under a constant liquid flow rate, ql = 24 gph, with respect to the change of gas flow rate. from the figure it can be clearly seen that under this flow condition, a stratified structure is visually observed, with a relatively stable interface at the lower gas flow rates. an inertia-dominated flow, namely supercritical flow, is observed in the inclined section and half of the elbow, while a gravity-dominated flow, i.e. subcritical flow, is observed in almost the entire horizontal section. a hydraulic jump (hj) is further established as a flow transition from the supercritical to the subcritical region. here the liquid is accelerated by the presence of both the riser's elevation and the bended geometry. moreover, as the gas flow rate is increased by a small increment, small waves are formed; they propagate along the direction of gas flow. as a result, a wavier interface followed by a thickening film layer around the upper end of the hydraulic jump is observed as the gas flow rate increases.

figure 3. another typical set of flow visualization at the flow condition of qg = 30 lpm with respect to the change of liquid flow rate (ql = 16, 20, 24 gph)

on the other hand, fig. 3 depicts another typical visualization of the countercurrent flow under a constant gas flow rate, qg = 30 lpm, with respect to the change of the liquid flow rate.
from the figure it can be seen that under this flow condition, a stratified structure is also established, with a somewhat more fluctuating interface than in the previous case at the lower liquid flow rates, when the gas flow rate is kept constant. a supercritical flow is observed in the inclined section and half of the elbow, while a subcritical flow is observed in almost the entire horizontal section. since a lower liquid flow rate is applied at the beginning, a hydraulic jump with a shorter length is observed as the transition while the liquid is accelerated. furthermore, as the liquid flow rate is increased, small waves are formed and propagate along the direction of gas flow. similarly, a wavier interface with a thickening layer around the upper end of the hydraulic jump is observed as the liquid flow rate increases.

fig. 4 shows the effect of the gas flow rate on the average normalized film thickness, δ/d, under the flow condition of ql = 24 gph at several measurement coordinates (loci l5, l6 and l7) of the subcritical region. from the figure it can be seen that δ/d slightly increases and then decreases as the gas flow rate increases. from qg = 0 – 15 lpm, δ/d slightly increases, with the measurement location nearest the elbow obtaining the thickest liquid film. subsequently, from qg = 15 – 20 lpm, l5, l6 and l7 exhibit an equal δ/d, whereas from qg = 20 – 35 lpm, δ/d decreases, with the measurement location nearest the elbow exhibiting the sharpest slope.

figure 4. the effect of gas flow rate on the average normalized film thickness, δ/d, under the flow condition of ql = 24 gph

on the other hand, fig.
5 describes the effect of the liquid flow rate on δ/d under the flow condition of qg = 30 lpm. from the figure it can be seen that δ/d tends to increase as the liquid flow rate increases. here, from ql = 14 – 20 gph, δ/d increases, with the measurement location nearest the elbow obtaining the thickest liquid film. subsequently, from ql = 20 – 24 gph, δ/d increases then decreases. in addition, from ql = 16 – 24 gph, l5, l6 and l7 obtain an almost equal δ/d. from ql = 24 – 26 gph, δ/d decreases.

figure 5. the effect of liquid flow rate on the average normalized film thickness, δ/d, under the flow condition of qg = 30 lpm

from figs. 4-5, it can be inferred that the film thickness tends to fluctuate as the fluid flow rate increases in the subcritical region. in addition, from the visualization at the flow conditions of ql = 24 gph, qg = 35 lpm and of qg = 30 lpm, ql = 26 gph, the limit of stability of the countercurrent flow is reached, and flooding occurs. the explanation is that as the fluid flow rate increases, the liquid level around the upper end of the hydraulic jump increases. this phenomenon leaves a narrow area for the gas to pass through. based on the momentum balance equation of horizontal countercurrent two-phase flow, the relative velocity between the phases then increases. as a result, the drag force increases and causes the liquid to block the entire cross-sectional area of the conduit, initiating flooding followed by churn flow. therefore, a rather sharp decrease of δ/d is reported to correspond to the transition from a stratified to a flooding regime [16].

4 conclusions
the behavior of the interface during gas/liquid countercurrent two-phase flow phenomena was briefly investigated.
here, a fluid pair consisting of a high-viscosity liquid and a gas flows through a complex conduit representing a 1/30 scaled-down version of a pwr hot leg's typical geometry. the flow structures were visually observed, while the film thicknesses were extracted by an image processing algorithm from the tabulated optical data. the obtained results reveal that a rather sharp decrease in liquid film thickness corresponds to the transition from a stratified to a flooding regime.

acknowledgements
the author thanks dr. apip badarudin, dr. ignb. catrawedarma, dr. setya wijayanta, akhlisa nadiantya aji nugroho, m.eng., and all members of the multiphase flow research group, laboratory of fluid mechanics and heat transfers, department of mechanical and industrial engineering, universitas gadjah mada, for several brief discussions during the facility installation and particular analyses. moreover, a phantom miro m310 lab high speed camera was utilized during the experiments for a better understanding of the flow phenomenology; the author therefore also acknowledges pt. chevron indonesia for this opportunity.

references
[1] deendarlianto, t. höhne, d. lucas, k. vierow, gas-liquid two-phase flow in a pwr hot leg: a comprehensive research review, nuclear engineering and design 243 (2) (2012) 214–233.
[2] deendarlianto, c. vallée, d. lucas, m. beyer, h. pietruske, h. carl, erratum: experimental study on the air/water counter-current flow limitation in a model of the hot leg of a pressurized water reactor, nuclear engineering and design 241(8) (2011) 3359–3372.
[3] a. badarudin, s. t. pinindriya, y. v. yoanita, m. s. hadipranoto, s. hartono, r.
ariawan, indarto, deendarlianto, the effect of horizontal pipe length to the onset of flooding position on the air-water counter current two-phase flow in a 1/30 scale of pressurized water reactor (pwr), aip conference proceedings 2001 (2018) 03001.
[4] a. badarudin, indarto, deendarlianto, a. setyawan, characteristics of the air-water counter current two-phase flow in a 1/30 scale of pressurized water reactor (pwr): interfacial behavior and ccfl data, aip conference proceedings 1737 (2016) 040015.
[5] a. badarudin, a. setyawan, o. dinaryanto, a. widyatama, indarto, deendarlianto, interfacial behavior of the air-water counter-current two-phase flow in a 1/30 scale-down of pressurized water reactor (pwr) hot leg, annals of nuclear energy 116 (2018) 376–387.
[6] deendarlianto, t. höhne, d. lucas, c. vallée, g. a. m. zabala, cfd studies on the phenomena around counter-current flow limitations of gas/liquid two-phase flow in a model of a pwr hot leg, nuclear engineering and design 241(12) (2011) 5138–5148.
[7] a. h. astyanto, y. rahman, a. y. a. medha, indarto, deendarlianto, time-series differential pressure fluctuations of a flooding regime: a preliminary experimental results investigation on a 1/30 down-scaled pwr hot leg geometry, aip conference proceedings 2403 (2021) 060001.
[8] a. h. astyanto, y. rahman, a. y. a. medha, deendarlianto, indarto, pengaruh rasio i/d terhadap permulaan flooding dan fluktuasi voltase sinyal tekanan rezim flooding pada geometri kompleks, rekayasa mesin 12 (2021) 447–457.
[9] deendarlianto, a. ousaka, indarto, a. kariyasaki, d. lucas, k. vierow, c. vallee, k. hogan, the effects of surface tension on flooding in counter-current two-phase flow in an inclined tube, experimental thermal and fluid science 34(7) (2010) 813–826.
[10] deendarlianto, a. ousaka, a. kariyasaki, t.
fukano, investigation of liquid film behavior at the onset of flooding during adiabatic counter-current air-water two-phase flow in an inclined pipe, nuclear engineering and design 235(21) (2005) 2281–2294.
[11] deendarlianto, a. ousaka, a. kariyasaki, t. fukano, m. konishi, the effects of surface tension on the flow pattern and counter-current flow limitation (ccfl) in gas-liquid two-phase flow in an inclined pipe, japanese journal of multiphase flow 18 (4) (2004) 337–350.
[12] a. ihsan, a. h. astyanto, indarto, deendarlianto, kajian eksperimental karakteristik perilaku antarmuka aliran berlawanan arah di geometri 1:30 hot leg pwr menggunakan sensor kawat sejajar, prosiding seminar nasional multidisiplin ilmu universitas respati yogyakarta, 3 (1) (2021).
[13] a. h. astyanto, j. a. e. pramono, i. g. n. b. catrawedarma, deendarlianto, indarto, statistical characterization of liquid film fluctuations during gas-liquid two-phase counter-current flow in a 1/30 scaled-down test facility of a pressurized water reactor (pwr) hot leg, annals of nuclear energy 172 (2022) 109065.
[14] a. ghoshal, a. aspat, e. lemos, opencv image processing for ai pet robot, international journal of applied sciences and smart technologies, 3 (1) (2020) 65–82.
[15] a. h. astyanto, a. n. a. nugroho, indarto, i. g. n. b. catrawedarma, d. lucas, deendarlianto, statistical characterization of the interfacial behavior captured by a novel image processing algorithm during the gas/liquid counter-current two-phase flow in a 1/3 scaled down of pwr hot leg, nuclear engineering and design 404 (2023) 112179.
[16] a. h. astyanto, indarto, k. v.
kirkland, deendarlianto, an experimental study on the effect of liquid properties on the counter-current flow limitation (ccfl) during gas/liquid counter-current two-phase flow in a 1/30 scaled-down of pressurized water reactor (pwr) hot leg geometry, nuclear engineering and design 399 (2) (2022) 112052.

international journal of applied sciences and smart technologies, volume 1, issue 2, pages 101-112, p-issn 2655-8564, e-issn 2685-9432

on the synthesis of a linear quadratic controller for a quadcopter

hendra g. harno
department of aerospace and software engineering, gyeongsang national university, jinju 52828, republic of korea
corresponding author: h.g.harno@gmail.com
(received 31-05-2019; revised 17-10-2019; accepted 17-10-2019)

abstract
this paper discusses synthesizing a state-feedback controller for a quadcopter based on an optimal linear quadratic control method. the resulting flight control system enables the quadcopter to maintain stability and to track a reference input. the solution to this control problem involves solving an algebraic riccati equation. the reference-input tracking is simulated to show the capability of the quadcopter flight control system.

keywords: flight control, quadcopter, linear quadratic control, tracking

1 introduction
the versatility of the quadcopter has been celebrated by different communities, including, but not limited to, hobbyists, entrepreneurs, medical officers, defence forces, engineers and scientists. it has been used for various purposes (both civilian and military) that can benefit from the quadcopter as a flying vehicle. this is realizable because the quadcopter is relatively easy to operate compared to a full-scale conventional rotorcraft.
Moreover, users do not need a runway, an airport or a helipad to operate the quadcopter. In some applications, the quadcopter can even be controlled remotely in a cost-effective manner to accomplish particular missions. It is also true that the quadcopter has a relatively simple design with an uncomplicated structure. Thus, operational and maintenance costs pertaining to quadcopter operations tend to be low. All these features indeed signify the merit of the quadcopter for wider usage nowadays and in the future [1, 2]. These advantages are more meaningful for the users if the quadcopter carrying payloads is equipped with appropriate flight control, instrumentation and communication systems. The flight control system in particular serves as an indispensable part that enables the quadcopter to perform various maneuvers in its operation [3]. Thus, we will only discuss synthesizing a linear controller to enable the quadcopter to fly properly. There are different sorts of control methods that can be applied to develop the flight control system, such as PID control, linear quadratic control, $H_\infty$ control, sliding mode control and adaptive control (see e.g. [4, 5, 6]). To construct the linear controller for the quadcopter, a linear time-invariant state-space model is used to represent the quadcopter dynamics at a chosen trim condition (equilibrium point). In this case, the linear model was derived through the Taylor series expansion as presented in [7]. Parameters of this model were identified and validated based on the Comprehensive Identification from Frequency Responses (CIFER) method [8]. This system identification method is well known as one that is able to yield a representative linear model for synthesizing a flight controller.
Another system identification method that is also suitable for unmanned flying vehicles is referred to as the Modeling for Flight Simulation and Control Analysis (MOSCA) method [9]. In this paper, the linear quadratic control method is applied to construct a state-feedback controller for the quadcopter. This controller is obtained by minimizing a linear quadratic cost function subject to the quadcopter linear dynamics [10]. It is then assumed that information about all state variables of the quadcopter is available for feedback control. The aims of applying this controller are to stabilize the quadcopter at the equilibrium point and to allow the quadcopter to track a reference input [11]. An example based on the quadcopter model for hovering flight [7] is presented to illustrate the performance of the resulting linear quadratic controller.

The rest of this paper is organized as follows. Section 2 presents the equations of motion of the quadcopter underlying the derivation of the linear state-space model used for synthesizing the linear quadratic controller. Section 3 shows an example of applying the optimal linear quadratic control method to synthesize the stabilizing controller for the quadcopter. Finally, concluding remarks are presented in Section 4.

2 Problem Formulation
2.1 Equations of motion
A quadcopter is commonly considered as an aircraft which can move freely in six degrees of freedom within an air space. Thus, during its flight, the quadcopter is capable of simultaneously performing translational and rotational motions driven by external forces and torques/moments, respectively. To properly utilize the quadcopter for practical applications, it is then necessary to grasp such motions through a mathematical model derived based on physical laws.
A suitable mathematical model of the rigid-body dynamics of the quadcopter is usually presented in terms of equations of motion. These equations can be derived based on Newton's second law of motion and the kinematic principles of a moving reference frame [12, 13]. That is,

$$m(\dot{\mathbf{v}} + \boldsymbol{\omega} \times \mathbf{v}) = \mathbf{F}, \qquad J\dot{\boldsymbol{\omega}} + \boldsymbol{\omega} \times (J\boldsymbol{\omega}) = \mathbf{M}. \qquad (1)$$

Each physical quantity in the quadcopter dynamics equations (1) has three components in the space (except the mass $m$) and is described as follows: $\mathbf{F}$ and $\mathbf{M}$ are the external forces and torques/moments acting on the quadcopter, $m$ and $J = \mathrm{diag}(I_x, I_y, I_z)$ are the mass and moment of inertia, and $\mathbf{v} = [u\ v\ w]^\top$ and $\boldsymbol{\omega} = [p\ q\ r]^\top$ are the translational velocities and angular rates, with $\mathbf{F} = [X\ Y\ Z]^\top$ and $\mathbf{M} = [L\ M\ N]^\top$. Thus, the quadcopter equations of motion are expressed as follows:

$$\mathbf{F} = m\begin{bmatrix} \dot{u} + qw - rv \\ \dot{v} + ru - pw \\ \dot{w} + pv - qu \end{bmatrix}, \qquad
\mathbf{M} = \begin{bmatrix} I_x\dot{p} + (I_z - I_y)qr \\ I_y\dot{q} + (I_x - I_z)pr \\ I_z\dot{r} + (I_y - I_x)pq \end{bmatrix}. \qquad (2)$$

The quadcopter is typically powered by four motors mounted on the tips of its arms. A fixed-pitch propeller is installed on each motor to produce thrust that can be varied to propel and control the quadcopter's motion. Thus, incorporating the gravitational and control forces, the equations of motion in (2) can be enhanced to the form:

$$\begin{cases} \dot{u} = rv - qw - g\sin\theta + X/m \\ \dot{v} = pw - ru + g\sin\phi\cos\theta + Y/m \\ \dot{w} = qu - pv + g\cos\phi\cos\theta + Z/m \\ \dot{p} = \big[(I_y - I_z)qr + L\big]/I_x \\ \dot{q} = \big[(I_z - I_x)pr + M\big]/I_y \\ \dot{r} = \big[(I_x - I_y)pq + N\big]/I_z \end{cases} \qquad (3)$$

2.2 A linear state-space model
The quadcopter's dynamic equations in (3) can concisely be written in the form of a single nonlinear differential equation as follows:

$$\dot{x} = f(x, u_c), \qquad (4)$$

where $x = [u\ v\ w\ p\ q\ r\ \phi\ \theta\ \psi]^\top$ is the state vector, $u_c = [\delta_h\ \delta_\phi\ \delta_\theta\ \delta_\psi]^\top$ is the control input vector, and $f(\cdot)$ is a vector-valued nonlinear function of $x$ and $u_c$. Each component of $x$ and $u_c$ is described as follows [7]: $u$, $v$, $w$ are the longitudinal, lateral and vertical velocities; $p$, $q$, $r$ are the roll, pitch and yaw rates; $\phi$, $\theta$, $\psi$ are the roll, pitch and yaw angles; and $\delta_h$, $\delta_\phi$, $\delta_\theta$, $\delta_\psi$ are the heave, roll, pitch and yaw control inputs, respectively.
For the purpose of linear controller design, it is reasonable to linearize the nonlinear dynamic equation (4) about an equilibrium point $(x_e, u_e)$ via the Taylor series expansion. At the equilibrium point $(x_e, u_e)$ the quadcopter is said to be in a trim condition, where all forces and moments acting upon the quadcopter are balanced. Consequently, the nonlinear dynamic equation (4) is equal to zero, that is, $f(x_e, u_e) = 0$. The state trajectory and control input of the quadcopter about the equilibrium point $(x_e, u_e)$ are given by

$$x(t) = x_e + \delta x(t), \qquad u_c(t) = u_e + \delta u(t). \qquad (5)$$

Thus, by taking only the first-order terms of the Taylor series expansion of the nonlinear dynamic equation (4), one may obtain a linear state-space model as follows:

$$\delta\dot{x} = A\,\delta x + B\,\delta u, \qquad (6)$$

where $A = \partial f/\partial x$ and $B = \partial f/\partial u_c$ are Jacobian matrices evaluated at the equilibrium point $(x_e, u_e)$. For simplicity, the symbol $\delta$ in (6) will be removed subsequently. Referring to (3), the $A$ and $B$ matrices in (6) take a sparse form (7) whose non-zero entries are the gravitational acceleration $g$ and the stability derivatives of the forces and moments with respect to the corresponding state variables and control inputs [7, 8].

2.3 Optimal linear quadratic control
Stabilization problem. Given the linear time-invariant state-space model (6), (7) of the quadcopter, one may design a state-feedback controller by minimizing a linear quadratic cost function as follows [10]:

$$J = \int_0^\infty \big[x(t)^\top Q x(t) + u(t)^\top R u(t)\big]\,dt, \qquad (8)$$

where $Q \ge 0$ and $R > 0$ are weighting matrices. The desirable state-feedback controller is of the form

$$u(t) = -Kx(t), \qquad (9)$$

where $K$ is the state-feedback controller gain matrix. Applying the state-feedback controller (9) to the open-loop state-space model (6) and (7) of the quadcopter, one obtains a closed-loop system given as follows:

$$\dot{x} = (A - BK)x. \qquad (10)$$

Thus, the controller gain matrix $K$ must be such that the matrix $A - BK$ is Hurwitz in order to result in an asymptotically stable closed-loop system.
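When an analytic Taylor expansion of (4) is inconvenient, the Jacobians in (5) and (6) can also be approximated numerically. The sketch below is a generic central-difference linearizer; the function name and the generic dynamics `f` are assumptions for illustration, not the identified model of [7]:

```python
import numpy as np

def linearize(f, x_e, u_e, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at (x_e, u_e)."""
    n, m = len(x_e), len(u_e)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x_e + dx, u_e) - f(x_e - dx, u_e)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_e, u_e + du) - f(x_e, u_e - du)) / (2 * eps)
    return A, B
```

Evaluated at a trim point where f(x_e, u_e) = 0, the returned matrices play the role of A and B in the linear model (6).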
This control problem is commonly known as stabilization or regulation of an open-loop linear system around its equilibrium point and is solvable if $(A, B)$ is stabilizable. The resulting closed-loop system is then able to return to the equilibrium point by eliminating the effect of any non-zero initial conditions. Such a controller is then called a linear quadratic regulator [12, 13]. To obtain such a stabilizing controller that minimizes the cost function (8), one is required to find a symmetric matrix $P$ as a solution to an algebraic Riccati equation:

$$A^\top P + PA - PBR^{-1}B^\top P + Q = 0. \qquad (11)$$

The controller gain matrix can then be constructed as

$$K = R^{-1}B^\top P, \qquad (12)$$

and the minimal cost function value is given as

$$J^* = x(0)^\top P x(0). \qquad (13)$$

When synthesizing a desirable controller to satisfy stability and performance criteria of a closed-loop system, one has to appropriately determine the weighting matrices $Q$ and $R$. To serve this purpose, it is quite common to choose $Q$ and $R$ as diagonal matrices. Thus, they can be interpreted as penalties corresponding to each state and control input variable, respectively. Although the weighting matrices $Q$ and $R$ are in general not unique, one may follow Bryson's rule to set their diagonal entries [10]. That is,

$$Q_{ii} = \frac{1}{\max(x_i^2)}, \qquad R_{jj} = \frac{1}{\max(u_j^2)}. \qquad (14)$$

Tracking problem. In practice, one may not only be interested in stabilizing the open-loop system, but also in tracking a reference input. To achieve this control objective, one needs to first define an output variable $y$ required to follow the reference input $r$. That is,

$$y = Cx, \qquad (15)$$

where $C$ is the output matrix. Thus, to design a state-feedback controller of the form (9) such that the output $y$ will track the reference input $r$, one may follow the same procedure to design the controller gain matrix as above. In other words, the tracking control problem can be solved by transforming it into the stabilization problem.
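The gain construction in (11), (12) and the weight selection in (14) can be sketched in a few lines. This is an illustration, not the paper's implementation: SciPy's `solve_continuous_are` stands in for the Riccati solver, and the helper names and the `max_x`/`max_u` arguments are my own:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def bryson_weights(max_x, max_u):
    """Bryson's rule (14): weight each variable by the inverse square of its
    largest acceptable magnitude."""
    return (np.diag(1.0 / np.asarray(max_x) ** 2),
            np.diag(1.0 / np.asarray(max_u) ** 2))

def lqr_gain(A, B, Q, R):
    """Solve the algebraic Riccati equation (11) for P, then form K as in (12)."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K, P
```

For any stabilizable pair (A, B) the resulting A - BK is Hurwitz, e.g. for a double integrator with Bryson-style weights.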
In this regard, an error variable $\xi$ is defined such that

$$\dot{\xi} = y - r = Cx - r. \qquad (16)$$

Now, combining (6), (15) and (16), one may obtain an augmented open-loop system:

$$\dot{\bar{x}} = \bar{A}\bar{x} + \bar{B}u + \bar{E}r, \qquad \bar{y} = \bar{C}\bar{x}, \qquad (17)$$

where

$$\bar{x} = \begin{bmatrix} x \\ \xi \end{bmatrix}, \quad \bar{A} = \begin{bmatrix} A & 0 \\ C & 0 \end{bmatrix}, \quad \bar{B} = \begin{bmatrix} B \\ 0 \end{bmatrix}, \quad \bar{E} = \begin{bmatrix} 0 \\ -I \end{bmatrix}, \quad \bar{C} = \begin{bmatrix} C & 0 \end{bmatrix}. \qquad (18)$$

Note that $0$ and $I$ are zero and identity matrices with appropriate dimensions, respectively. To track the reference input $r$, it is thus necessary to stabilize the augmented open-loop system (17) by applying a state-feedback controller of the form (9). That is,

$$u = -\bar{K}\bar{x} = -\begin{bmatrix} K & K_I \end{bmatrix}\begin{bmatrix} x \\ \xi \end{bmatrix}, \qquad (19)$$

where $\bar{K} = [K \ \ K_I]$. The resulting closed-loop system can then be written as

$$\dot{\bar{x}} = \bar{A}_{cl}\bar{x} + \bar{E}r, \qquad (20)$$

where

$$\bar{A}_{cl} = \bar{A} - \bar{B}\bar{K} = \begin{bmatrix} A - BK & -BK_I \\ C & 0 \end{bmatrix} \qquad (21)$$

is Hurwitz. Since the closed-loop system (20) is asymptotically stable, $\dot{x}$ and $\dot{\xi}$ will converge to zero as time goes to infinity. This implies that $y$ will be equal to $r$ at the steady state. In this way, the tracking control problem has been solved by incorporating an integral control action into the closed-loop system (20). In fact, this approach also renders the closed-loop system (20) robust against perturbations due to bounded exogenous disturbances.

3 Controller Synthesis
In this section, a state-feedback controller is designed for the quadcopter based on a linear dynamic model for hovering flight. The parameter values in the linear state-space model (6) and (7), together with the gravitational acceleration, are given in (22) as identified in [7]. Let us consider the case where the quadcopter is required to track lateral (roll), longitudinal (pitch) and directional (yaw) reference inputs. The state-feedback controller can be obtained by applying the linear quadratic control method described in subsection 2.3.
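The integral-tracking construction (16) through (21) can be exercised end to end on a toy plant. The double-integrator model below is an assumption standing in for a single quadcopter axis (it is not the identified model of [7]), and SciPy's Riccati solver and ODE integrator stand in for MATLAB's `lqr` and Simulink:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant standing in for one quadcopter axis.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented system (17)-(18): x_bar = [x; xi] with xi_dot = C x - r.
A_bar = np.block([[A, np.zeros((2, 1))], [C, np.zeros((1, 1))]])
B_bar = np.vstack([B, np.zeros((1, 1))])

# LQ gain K_bar = [K, K_I] from the Riccati solution, as in (11)-(12).
Q = np.diag([10.0, 1.0, 100.0])
R = np.array([[1.0]])
P = solve_continuous_are(A_bar, B_bar, Q, R)
K_bar = np.linalg.solve(R, B_bar.T @ P)

r = 1.0  # step reference

def closed_loop(t, xb):
    u = -K_bar @ xb                 # control law (19)
    dx = A @ xb[:2] + B[:, 0] * u   # plant dynamics (6)
    dxi = C @ xb[:2] - r            # error integrator (16)
    return np.concatenate([dx, dxi])

sol = solve_ivp(closed_loop, (0.0, 20.0), np.zeros(3), max_step=0.01)
y_final = sol.y[0, -1]  # approaches the reference: zero steady-state error
```

As the augmented closed-loop matrix is Hurwitz, the integral action drives the output to the step reference, mirroring the argument below (21).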
For this example, the weighting matrices $\bar{Q}$ and $\bar{R}$ are chosen to be diagonal, with entries set as in (23), where the subscripts denote the state and control input variables of the augmented open-loop system (17). The controller gain matrix $\bar{K}$ can then be computed using the command lqr of MATLAB. Moreover, the efficacy of the resulting controller can be demonstrated using Simulink. Examples of tracking reference inputs for the roll, pitch and yaw angles are considered, respectively. The time responses of these angular quantities are shown in Figures 1, 2 and 3. It is evident that the resulting controller is indeed able to stabilize the closed-loop system and also to allow the respective state variables to track the given reference inputs with zero steady-state errors.

Figure 1. The time response of the roll angle due to the sinusoidal reference input.
Figure 2. The time response of the pitch angle due to the step reference input.
Figure 3. The time response of the yaw angle due to the step reference input.

4 Conclusions
This paper has presented the optimal linear quadratic control method to synthesize a state-feedback controller for the quadcopter. The resulting controller is effective not only in stabilizing the quadcopter, but also in enabling some state variables to track the given reference inputs. The tracking capability is facilitated by the integral control action incorporated into the closed-loop system. If the reference input is considered as a perturbing exogenous input, it is then clear that the closed-loop system is robust against such a perturbation.
The current results can be extended to consider other control methods to address an output-feedback control problem with uncertainty related to the quadcopter model. Furthermore, a more complex control problem can also be considered where there are multiple quadcopters flying in formation within a network.

Acknowledgements
This work was supported by the Laboratory of Distributed Aerospace Systems and Control within the Department of Aerospace and Software Engineering, Gyeongsang National University, Jinju, Republic of Korea. The author would also like to thank Dr. Sudi Mungkasi for the opportunity to publish this paper in the International Journal of Applied Sciences and Smart Technologies.

References
[1] G. Cai, J. Dias, and L. Seneviratne, "A survey of small-scale unmanned aerial vehicles: recent advances and future development trends," Unmanned Systems, 2 (2), 175–199, 2014.
[2] M. Hassanalian and A. Abdelkefi, "Classifications, applications, and design challenges of drones: a review," Progress in Aerospace Sciences, 91, 99–131, 2017.
[3] G. Hoffmann, H. Huang, S. Waslander, and C. Tomlin, "Quadrotor helicopter flight dynamics and control: theory and experiment," in Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, 1–20, August 2007.
[4] F. Rinaldi, S. Chiesa, and F. Quagliotti, "Linear quadratic control for quadrotors UAVs dynamics and formation flight," Journal of Intelligent & Robotic Systems, 70 (1–4), 203–220, 2013.
[5] J.-J. Xiong and E.-H. Zheng, "Position and attitude tracking control for a quadrotor UAV," ISA Transactions, 53 (3), 725–731, 2014.
[6] H. Mo and G. Farid, "Nonlinear and adaptive intelligent control techniques for quadrotor UAV: a survey," Asian Journal of Control, 21 (2), 989–1008, 2019.
[7] W. Wei, M. B. Tischler, and K.
Cohen, "System identification and controller optimization of a quadrotor unmanned aerial vehicle in hover," Journal of the American Helicopter Society, 62 (4), 1–9, 2017.
[8] M. B. Tischler and R. K. Remple, Aircraft and Rotorcraft System Identification, American Institute of Aeronautics and Astronautics, Virginia, 2012.
[9] M. L. Civita, W. C. Messner, and T. Kanade, "Modeling of small-scale helicopters with integrated first-principles and system-identification techniques," Proceedings of the 58th Forum of the American Helicopter Society, 2505–2516, June 2002.
[10] J. P. Hespanha, Linear Systems Theory, Princeton University Press, New Jersey, 2nd edition, 2018.
[11] M. Kanamori and M. Tomizuka, "Dynamic anti-integrator-windup controller design for linear systems with actuator saturation," Journal of Dynamic Systems, Measurement and Control, 129 (1), 1–12, 2007.
[12] B. Mettler, Identification Modeling and Characteristics of Miniature Rotorcraft, Springer, New York, 2003.
[13] G. D. Padfield, Helicopter Flight Dynamics, Blackwell Publishing, Oxford, 2nd edition, 2007.
International Journal of Applied Sciences and Smart Technologies
Volume 5, Issue 1, pages 75–88
p-ISSN 2655-8564, e-ISSN 2685-9432

This work is licensed under a Creative Commons Attribution 4.0 International License.

Smoothing Module for Optimization of Cranium Segmentation Using 3D Slicer

Gilang Argya Dyaksa(1,*), Nur Arfian(2), Herianto(3), Lina Choridah(4), Yosef Agung Cahyanta(1)
(1) Faculty of Science & Technology, Sanata Dharma University, Indonesia
(2) Department of Anatomy, Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada, Yogyakarta, Indonesia
(3) Department of Mechanical and Industrial Engineering, Universitas Gadjah Mada, Yogyakarta, Indonesia
(4) Department of Radiology, Faculty of Medicine, Public Health and Nursing, Yogyakarta, Indonesia
Corresponding author: gilangad@usd.ac.id
(received 28-04-2023; revised 03-05-2023; accepted 04-05-2023)

Abstract
Anatomy is the most essential course in health and medical education for studying the parts of the human body and their functions. The cadaver is a medium used by medical students to study anatomy. Because of limited access to cadavers and their high prices, it is necessary to develop alternative anatomical education media, one of which is the use of 3D printing to produce anatomical models. Before 3D printing the cranium, a segmentation process is required, and often the segmentation result is not good enough and contains a lot of noise. The purpose of this research is to optimize a 3D cranium based on DICOM (Digital Imaging and Communications in Medicine) data processing using the smoothing modules in 3D Slicer. The method of this research is to process the cranium DICOM data using the 3D Slicer software while varying the 5 types of smoothing modules. With default parameters, fill holes and median give better results than the others. Kernel size variations are then performed for the fill holes and median smoothing modules.
The result is that fill holes yields optimal segmentation with a kernel size of 3 mm, and the median with 5 mm.

Keywords: anatomy, cranium, 3D Slicer, smoothing module

1 Introduction
Anatomical education is one of the most important courses in the health and medical field [1]. Anatomy is studied using body donors called cadavers. Cadavers can be obtained in several ways, such as direct donations from people who have agreed to donate their bodies after death to the university, or the bodies of patients who died in hospitals with no family claiming them [2]. Cadavers are expensive, and the number of cadavers at a university is limited [3]. Universitas Gadjah Mada (UGM) has not received any cadavers since 2001. These conditions are the reason for the development of anatomical education media, one of which is the use of 3-dimensional (3D) anatomical models. The pandemic was also a reason for developing anatomical education media using 3D technology, due to the limited access to campus and the laboratory. 3D anatomical models provide an overview of anatomical structures more practically and are very useful for explaining the functions of and relationships between anatomical structures. Anatomical education media using 3D models can also reduce institutional costs, namely the costs of cadaver storage. Because of these conditions, the development of medical advancement and 3D anatomical models is one of the priorities in this modern era [4]. One approach that is often taken is multi-modality 3D imaging technology and the analysis of DICOM data using computer science and bioinformatics.
The purpose of this development is that medical workers can quickly see a visual of the patient's body acquired using CT or MRI, view the results in 3D, and diagnose patients. 3D imaging can also be used as a reference if the patient has defects or various medical disorders, which is important information in the medical field for planning or determining procedures, such as surgery, that must be performed on patients [5].

Figure 1. Graph of cadaver acceptance in the Department of Anatomy, Faculty of Medicine UGM since 1982 [3].

Analysis of CT results into 3D models can be done using the open-source software 3D Slicer, which can perform medical image processing and visualization from DICOM data and can export to 3D mesh formats such as STL and OBJ [6]. 3D Slicer is often used as a platform for development and analysis to produce prototype 3D models that suit clinical needs [7]. The resulting prototype can be a 3D model or can be 3D printed to create a physical prototype of an anatomical model. In this research, we use 3D Slicer as software for prototyping and developing cranium image analysis tools for clinical research applications, and we also provide an optimization using smoothing modules to make an optimal 3D anatomical model, in this case the cranium.

2 Methods
This research uses only the cranium as the anatomical model, and the method of segmenting the cranium data is in accordance with previous research using the threshold technique [8].
• DICOM data
3D DICOM CT-scan data of patients were obtained from the Radiology Department of the Faculty of Medicine, Public Health and Nursing (FKKMK), Universitas Gadjah Mada (UGM), Central General Hospital (RSUP) Dr. Sardjito.
• DICOM viewer and segmentation
The DICOM file was processed using the DICOM viewer and segmentation software 3D Slicer. The software was used to view the DICOM file and segment it to obtain the region of interest for this research, the cranium. The threshold value used in this research is 150 for the lower limit, with the upper limit using the default value of 2976. After the segmentation process, the next step is to vary the smoothing module with the aim of eliminating areas that are not included in the region of interest, which can also be referred to as noise.
• Smoothing modules
The smoothing module is located in the Segment Editor, and its main function is to smooth the area that has been segmented before. There are 5 types of smoothing modules: median, opening, closing, Gaussian, and joint smoothing [9]. Median removes small extrusions and fills small gaps while keeping smooth contours mostly unchanged; it is applied to the selected segment only. Opening removes objects smaller than the specified kernel size. Closing fills sharp corners and holes smaller than the specified kernel size. Gaussian smooths all object details strongly, but tends to shrink the object. Joint smoothing tends to preserve watertight interfaces between separate objects. After varying the 5 types of smoothing modules, 2 modules will be selected for the next step, which is to vary the kernel size to determine its effect on the cranium segmentation results. The kernel size is the diameter of the neighborhood considered around each voxel.
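The effects of the smoothing modules just described can be illustrated outside 3D Slicer with standard morphology on a binary mask. This is only an approximation for intuition, not the Segment Editor implementation [9]: the function name is mine, the kernel is given in voxels rather than millimeters, and joint smoothing is omitted because it operates on several segments at once.

```python
import numpy as np
from scipy import ndimage

def smooth_mask(mask, method, kernel_vox=3):
    """Approximate the Segment Editor smoothing modes on a binary mask.
    kernel_vox plays the role of the kernel size (here in voxels, not mm)."""
    size = (kernel_vox,) * mask.ndim
    if method == "median":    # removes small extrusions, fills small gaps
        return ndimage.median_filter(mask.astype(np.uint8), size=kernel_vox).astype(bool)
    if method == "opening":   # removes objects smaller than the kernel
        return ndimage.binary_opening(mask, structure=np.ones(size))
    if method == "closing":   # fills corners and holes smaller than the kernel
        return ndimage.binary_closing(mask, structure=np.ones(size))
    if method == "gaussian":  # strong smoothing; tends to shrink the object
        return ndimage.gaussian_filter(mask.astype(float), sigma=kernel_vox / 3.0) > 0.5
    raise ValueError(f"unknown method: {method}")
```

Sweeping `kernel_vox` mirrors the 1 to 5 mm kernel-size study below: larger kernels smooth more strongly and suppress more detail.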
The 3D Slicer software documentation explains that the greater the kernel size, the stronger the smoothing, and the more details will be suppressed. This study varies 5 values of kernel size, namely 1, 2, 3, 4, and 5 mm, with 3 mm being the default kernel size.

3 Results and Discussions
This research used the 3D Slicer software to segment the cranium from DICOM data. This step used the same threshold value throughout.

Figure 2. User interface of the software.
Figure 3. Segmentation output.

The user interface of the software is shown in Fig. 2. The software can show the three planes commonly used by medical officers to view CT imaging results, namely axial, sagittal, and coronal, shown as the red, yellow, and green window tabs. After opening the DICOM file, segmentation was carried out to determine the region of interest. The segmentation result shown in Fig. 3 is segmentation without using a smoothing module. Next, the results of segmentation using smoothing modules will be discussed, shown in 2 views: the coronal plane and the sagittal plane.

Figure 4. Segmentation using smoothing modules, coronal plane.

The results of segmentation using smoothing modules can be seen in Fig. 4 for the coronal plane and Fig. 5 for the sagittal plane. Viewed from the coronal plane, Gaussian has poor results because many details are missing, such as in the frontal bone and maxilla. Opening also has poor results because some parts are perforated, and it removes details from the cranium. Joint smoothing has quite good results, but the details of the teeth are lost. From the coronal view, the median and fill holes have the best results among the smoothing modules.
Moving to the sagittal view, Gaussian and opening still have poor results; in particular, some parts of the cranium disappear so that the anatomy is not very clear. Joint smoothing has quite good results as well, but especially in the ethmoid region the cranium loses many parts and thus much detail.

Figure 5. Segmentation using smoothing modules, sagittal plane.

A comparison of segmentation results without a smoothing module and with the fill holes smoothing module is shown in Fig. 6. Using fill holes, many cavities in the bone become more closed. In addition, many parts of the cranium are connected better using this module, and the area around the occipital and sphenoid bones shows quite good detail.

Figure 6. Segmentation using smoothing module: default (left) and fill holes (right).
Figure 7. Segmentation using smoothing module: default (left) and median (right).

If the median smoothing module is used, as seen in Fig. 7, the result is almost similar to fill holes: it closes several cavities in the bone and can join several parts that are separate when segmented without a smoothing module. However, in certain places some parts become disconnected, such as around the frontal and sphenoid bones. As a decision in this first experimental step, the fill holes and median smoothing modules were selected to proceed to the second experimental step, which is to vary the kernel size of these two smoothing modules to find out their effect on the cranium segmentation. The results of cranium segmentation using the fill holes smoothing module with varying kernel size values can be seen in Fig. 8 for the coronal view and Fig. 9 for the sagittal view.
Viewed from the coronal view, the kernel size variations do not produce a significant difference.

Figure 8. Segmentation using smoothing module fill holes, coronal view.
Figure 9. Segmentation using smoothing module fill holes, sagittal view.

Viewed from the sagittal view in the kernel size range from 1 mm to 3 mm, in the spine and around the occipital bone, the higher the kernel size value, the more closed the cavities become; the kernel size of 3 mm (the default value) still retains fairly good anatomical detail around the occipital compared to the kernel sizes of 1 mm and 2 mm. However, at kernel sizes of 4 mm and 5 mm, anatomical details are reduced, because the results are almost the same as at kernel sizes of 1 mm and 2 mm, characterized by cavities that expand again around the mandible, spine, and occipital. The best result of the kernel size variation for the fill holes smoothing module is the kernel size of 3 mm.

Figure 10. Segmentation using smoothing module median, coronal view.
Figure 11. Segmentation using smoothing module median, sagittal view.

The results of segmentation using the median smoothing module with varying kernel sizes can be seen in Fig. 10 for the coronal view and Fig. 11 for the sagittal view. In the coronal view, the median differs slightly from fill holes: at the kernel size of 3 mm the segmentation results show a slight reduction in detail in the teeth. This reduction is not seen at any kernel size other than 3 mm. Viewed in the sagittal view, the results are almost similar to fill holes. The results at kernel sizes of 1 mm and 2 mm are similar to the results at kernel sizes of 4 mm and 5 mm.
The default kernel size of 3 mm has different characteristics from the other kernel sizes, which can be seen in comparison with the kernel sizes of 2 mm and 4 mm, shown in more detail in Fig. 12.

Figure 12. Comparison result, median, sagittal view.

The 3 mm kernel size gives quite good segmentation results on the spine and occipital when compared to kernel sizes of 2 mm and 4 mm. However, the 3 mm kernel size has disadvantages around the sphenoid, where parts do not connect, whereas the segmentation results at kernel sizes of 2 mm and 4 mm show a better, connected sphenoid compared to the 3 mm kernel size.

Figure 13. Comparison result, median, kernel sizes 1, 2, 4, 5 mm.

Kernel sizes other than 3 mm also share other similarities, as can be seen in Fig. 13: all retain details around the maxilla that are not found at the kernel size of 3 mm. Therefore, for the most optimal results, the 5 mm kernel size was chosen; even though the results were almost the same as at 1 mm, 2 mm, and 4 mm, the 5 mm kernel size subjectively had more optimal results than the others.

4 Conclusions
Studies on the effect of smoothing modules and kernel size on cranium segmentation results have been presented. This study used 5 types of smoothing modules: fill holes, Gaussian, joint smoothing, median, and opening. With default parameters, fill holes and median give better results than the others. Kernel size variations were then performed for the fill holes and median smoothing modules. The result is that fill holes obtains optimal segmentation with a kernel size of 3 mm, and the median with 5 mm. This research is limited to cranium segmentation only. A direction for future research is to study the optimal smoothing module for each anatomical part, such as the brain, heart, liver, and abdomen.
Acknowledgements
I would like to acknowledge and give my thanks to my supervisors, Prof. Herianto and Dr. Nur Arfian, who made this work possible, for their guidance and advice throughout the writing of this research. I would also like to give special thanks to Dr. Lina Choridah from the Department of Radiology, Faculty of Medicine, Public Health and Nursing for the continuous support with the data and for her understanding during my research and writing project.

References
[1] K. Sugand, P. Abrahams, and A. Khurana, The anatomy of anatomy: a review for its modernization, Anatomical Sciences Education, (2010).
[2] S. N. Biasutto et al., Human bodies to teach anatomy: importance and procurement - experience with cadaver donation, (2014).
[3] M. M. Romi, N. Arfian, and D. C. R. Sari, Is cadaver still needed in medical education?, JPKI, 8 (3) (2019) 105.
[4] X. Zhang, K. Zhang, Q. Pan, and J. Chang, Three-dimensional reconstruction of medical images based on 3D Slicer, Journal of Complexity in Health Sciences, 2 (1) (2019) 1–12.
[5] A. Alhadidi, L. H. Cevidanes, B. Paniagua, R. Cook, F. Festy, and D. Tyndall, 3D quantification of mandibular asymmetry using the SPHARM-PDM tool box, International Journal of Computer Assisted Radiology and Surgery, 7 (2) (2012) 265–271.
[6] D. Kuswanto and A. Tontowi, Development of additive manufacturing methods for reconstruction and redesign of cranial bone defects in Indonesia, (2014).
[7] A. Fedorov et al., 3D Slicer as an image computing platform for the Quantitative Imaging Network, Magnetic Resonance Imaging, 30 (9) (2012) 1323–1341.
[8] G. A. Dyaksa, N. Arfian, and L. Choridah, Development of cranium 3-dimension puzzle products using 3D printing, (2020).
[9] C. Pinter, A. Lasso, and G. Fichtinger, Polymorph segmentation representation for medical image computing, Computer Methods and Programs in Biomedicine, 171 (2019) 19–26.
International Journal of Applied Sciences and Smart Technologies
Volume 2, Issue 2, pages 157–168
p-ISSN 2655-8564, e-ISSN 2685-9432

Feasibility of Smartphone Camera Utilization and PowerPoint-Based Video Analyzer on Kinematic Motion Experiment: An Inexpensive Method

E. D. Atmajati 1,*, R. A. Salam 2

1 Department of Physics Education, Faculty of Teacher Training and Education, Sanata Dharma University, Yogyakarta, Indonesia
2 Department of Physics Engineering, School of Electrical Engineering, Telkom University, Bandung, Jawa Barat, Indonesia
* Corresponding author: dian.atmajati@usd.ac.id

(Received 30-06-2020; revised 21-07-2020; accepted 21-07-2020)

Abstract

This study reports on the use of a smartphone camera and the PowerPoint application to analyze kinematic motion experiments. The method is intended to build a better understanding of students' concepts using tools commonly owned by students. The experiments performed in this study were one-dimensional (1-D), represented by falling motion, and two-dimensional (2-D), using parabolic motion. In evaluating the experimental results, the obtained data were compared with theoretical values calculated using an analytical approach. The method shows good measurement results in demonstrating the dependence of falling motion on gravitational acceleration and in proving the constant velocity of projectile motion in the horizontal plane, in agreement with the theoretical value. The video analysis method can also be used as an alternative to established software, and performs even better when a higher camera resolution and frame rate are used.
Noting that the tools used in the experiment are commonly available, they can serve as a replacement for more advanced tools.

Keywords: experimental analysis, kinematic motion, physics education, PowerPoint, smartphone camera

1 Introduction

Nowadays, misconceptions are one of the problems in physics learning [1,2]. For example, in kinematics, students may think that free-fall motion is not caused solely by gravitational acceleration, or that there is acceleration in the horizontal component of projectile motion. To overcome these problems, experiments can be a solution because they let students observe physics phenomena directly [3], relate the phenomena to the theory, and thereby increase their understanding [1-4]. Various devices have been used to build demonstration tools that are easy to understand, such as computer-based instrumentation [5], game controllers [1,2], cameras [6-11], and smartphones [1,2,12,13]. Among these, smartphones are widely used because they are embedded with various sensors [1,2,14]. Smartphones are used not only in academics but also in other applications such as spectroscopy analysis, psychiatric and medical examination, and health monitoring [14]. To perform kinematic experiments, previous studies usually use a camera to record video and then track the motion using various video analyzer software packages [7-11], besides using smartphones to map the motion through their accelerometer or gyroscope sensors, because taking video with them is easy. However, the cameras used in such experiments are generally high-speed digital cameras. Moreover, after capture, the videos are analyzed using additional tools such as LoggerPro, VideoPoint, or Measurement in Motion, which are expensive commercial software [15].
There is also free software such as Physics Toolkit, Tracker, and many others, which usually display the digitized coordinates automatically. However, this often causes students or users to pay too little attention to the motion of the observed object. To make students pay attention to the process, the motion analysis should be completed by the students directly. In this paper, the Microsoft PowerPoint application is used as a video analyzer to help students follow the movement of the observed object. Previously, Microsoft PowerPoint has been used to measure angles on digitized radiographic images [16] and for sagittal cephalometric diagnosis [17].

2 Research Methodology

The experiments used in this study are one-dimensional (1-D) and two-dimensional (2-D) kinematic motion experiments, represented by free fall and projectile motion, respectively. A tennis ball was used as the observed object. It was recorded by a 6 MP Canon D6000 that can capture 30 frames per second (fps). In addition, a Samsung Galaxy A30 with a 16 MP camera was used as an alternative way to perform the experiment. The smartphone camera can capture video at 30 fps in full HD (1080p) resolution. The phone is also equipped with a slow-motion feature, which is elaborated further below.

Figure 1. Experimental setting: (1) reference size; (2) object; (3) camera.

The ball position was measured by capturing video of the ball's movement with the smartphone camera. Two capturing modes were used in this experiment: normal mode and slow-motion mode.
The demonstration was conducted by placing the ball at a certain height facing a camera mounted on a tripod. The camera has to be placed so that the whole motion of the object can be recorded. To measure the height of the ball precisely, a reference of known size needs to be drawn and shown to the camera alongside the ball's movement, as shown in Figure 1. After being recorded, the video was analyzed using the PowerPoint-based method.

Figure 2. Flowchart of the PowerPoint-based video analyzer process.

The analysis of the ball position follows the flowchart presented in Figure 2. First, the video is inserted into Microsoft PowerPoint and its duration is adjusted with the "trim" feature so that only the required motion, from the initial to the final position of the object, is shown. Afterward, the reference of known length L is measured in the captured video by drawing a reference line of length l over it, as shown in Figure 3. By comparing the drawn line length with the actual reference size, the calibration constant c is obtained using equation (1):

c = L / l,    (1)

where L is the actual reference length and l is its measured length in the video. This constant is used to obtain the actual ball position h from the position x measured in the video, as calculated using equation (2):

h = c · x.    (2)

These measurement steps are repeated for all captured frames of the object's motion. By plotting the ball position as a function of time, the relation between ball position and time is obtained. In this study, falling motion was demonstrated by dropping the tennis ball from a certain height, while projectile motion was exhibited by throwing the ball at a random angle; the initial velocity and inclination angle of the projectile motion were not discussed in detail. The tennis ball was chosen because its size and color are easily captured by the camera. In performing the experiment, the measurement tracked the bottom edge of the ball.
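The calibration and conversion steps described above (equations (1) and (2)) can be sketched in a few lines of Python. All numeric values here are assumptions for illustration, not the measured values from the paper:

```python
# Sketch of the PowerPoint-based calibration step, assuming a reference
# object of known length L_actual measured in the slide as a line of
# length l_video. All numbers below are illustrative assumptions.

L_actual = 0.50        # known length of the reference object, metres (assumed)
l_video = 4.2          # length of the line drawn over it, slide units (assumed)

c = L_actual / l_video            # calibration constant, equation (1)

# Ball positions read off successive video frames, slide units (assumed)
x_video = [0.0, 0.4, 1.6, 3.6, 6.4]

# Actual ball positions in metres, equation (2): h = c * x
h_actual = [c * x for x in x_video]
print(h_actual)
```

Any object of known size visible in the frame can serve as the reference, as long as it lies in the same plane as the ball's motion.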
3 Results and Discussion

The measured ball positions for falling motion are presented in Figure 4. The positions extracted from the video are close to the theoretical line. The camera captures the object at the same time interval for each frame; however, instead of moving a constant distance per frame, the ball moves a distance that increases frame by frame, which shows that the ball's movement is getting faster. By fitting the data to a quadratic function, the acceleration of the ball is found to be 9.86 m/s², close to the gravitational acceleration of 9.82 m/s² at the research location [18]. Since, according to theory, the ball's acceleration should equal the gravitational acceleration, the error obtained from the experiment is 0.40 %, which is tolerable.

Figure 3. Object position tracking in PowerPoint.

Figure 4. Time-dependent position of the ball in falling motion.

Figure 5. Time-dependent position in projectile motion: (a) vertical motion (upward direction); (b) horizontal motion; (c) ball trajectory in the x, y plane.

Figure 6. Video analyzer comparison for free-fall motion.

Figure 5 shows the measured ball positions in projectile motion. As mentioned above, the ball's movement was produced by throwing the object at a certain angle without controlling the initial velocity and inclination angle; the main focus of this experiment is to strengthen the students' concept that projectile motion is a combination of two independent motions, one along each axis [18]. The vertical movement result is shown in Figure 5(a).
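The quadratic-fitting step used above to extract the acceleration can be sketched as follows. The data here are synthetic (generated with an assumed g of 9.82 m/s²), standing in for the positions read off the video:

```python
import numpy as np

# Fit a quadratic h(t) to position-time data and read the acceleration
# from the t^2 coefficient, as in the free-fall analysis described above.
# The data are synthetic, generated with an assumed g, not measured values.

g_true = 9.82                     # assumed local gravitational acceleration, m/s^2
t = np.linspace(0.0, 0.5, 16)     # frame times, s (a 30 fps video gives ~1/30 s steps)
h = 0.5 * g_true * t**2           # fall distance for free fall from rest

a2, a1, a0 = np.polyfit(t, h, 2)  # h ≈ a2*t^2 + a1*t + a0
g_fit = 2.0 * a2                  # acceleration is twice the quadratic coefficient
print(round(g_fit, 2))            # recovers 9.82 for this noise-free data
```

With real tracked positions the fit would also absorb measurement noise, which is where the small discrepancy between 9.86 m/s² and 9.82 m/s² comes from.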
According to Figure 5(a), fitting the experimental data shows a polynomial (quadratic) trend, which is appropriate for motion with constant acceleration. Further analysis gives an acceleration of 9.82 m/s². This value is close to the gravitational acceleration obtained in a previous study, which achieved 9.81 m/s² [11], and it also follows the theoretical analysis performed here. Meanwhile, the movement along the horizontal axis is a zero-acceleration motion, as shown in Figure 5(b), with an error of 3.45 % relative to theory [18]. Based on the theoretical lines in Figures 5(a) and 5(b), the initial velocity is 4.18 m/s with a direction of motion of 65.27° to the horizontal plane. The trajectory of the object on its horizontal and vertical axes is presented in Figure 5(c): the ball's movement forms a projectile trajectory, as also obtained by Klein et al. [11] under different conditions. This confirms that the movements along the two axes are independent of each other.

Figure 7. Comparison of smartphone and digital camera.

The analysis method in this study was then compared with established analysis tools, LoggerPro and Tracker, to see how the method performed. The result is presented in Figure 6. As depicted there, the PPT-based method is comparable to the other methods: the data distribution produced by the PPT-based method is close to both LoggerPro and Tracker. However, based on the fitted trendlines, the gravitational accelerations obtained are 9.82 m/s², 8.20 m/s², and 10.76 m/s² for the PowerPoint-based method, Tracker, and LoggerPro, respectively.
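The independence of the two axes can be made concrete by decomposing the reported initial velocity (4.18 m/s at 65.27° to the horizontal) into its components, a small sketch using only values stated in the text:

```python
import math

# Decompose the initial velocity of the projectile into its two
# independent components: in ideal projectile motion the horizontal
# component stays constant while the vertical one changes at rate -g.

v0 = 4.18                         # initial speed, m/s (value from the text)
theta = math.radians(65.27)       # launch angle above the horizontal

vx = v0 * math.cos(theta)         # horizontal component, constant in flight
vy = v0 * math.sin(theta)         # initial vertical component
print(round(vx, 2), round(vy, 2))
```

The constant vx is exactly what Figure 5(b) verifies (zero horizontal acceleration), while vy governs the quadratic trend of Figure 5(a).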
Given these results, the proposed method can be an alternative way to analyze kinematic videos. As mentioned above, the feasibility of a smartphone camera for performing a kinematic motion experiment was also investigated. The smartphone camera was used to record a free-fall experiment; the result is shown in Figure 7. According to Figure 7, the object position tracked with the smartphone camera in regular mode (RM) is recorded as clearly as with the digital camera (DC) because their frame rates are the same; with its higher resolution, however, the smartphone camera produces a brighter video for analysis [19]. Also, the video analysis performed with PowerPoint (RM – PowerPoint) is close to the LoggerPro results (RM – LoggerPro), as in Figure 6. The PowerPoint result gives a gravitational acceleration of 9.64 m/s² and LoggerPro 9.23 m/s². Taking the gravitational acceleration at the research location as 9.82 m/s², the deviation of the PowerPoint-based video analyzer is about 4 percentage points lower than that of LoggerPro. This result is, however, highly dependent on user accuracy in marking the object's position; PowerPoint makes it possible to zoom in on the video, which helps increase accuracy. This shows that the PowerPoint-based method is reliable for use in the experiment. Nowadays, many manufacturers equip their smartphone cameras with a slow-motion feature. Using this feature, the video frame rate can be raised; in this case, the frame rate is 125 fps. As can be seen in Figure 7, the slow-motion (SM) mode gives denser video tracking than regular mode (RM) due to its higher frame rate, so the motion can be tracked more finely [20]. At this stage, the temporal resolution of the experiment is higher.
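The deviation comparison above can be made explicit with a short calculation, using the reference and measured values quoted in the text:

```python
# Percent deviation of each analyzer's result from the gravitational
# acceleration at the research location (all values from the text).

g_ref = 9.82   # m/s^2, reference value at the research location
g_ppt = 9.64   # m/s^2, PowerPoint-based analyzer (regular-mode video)
g_lp = 9.23    # m/s^2, LoggerPro on the same video

dev_ppt = abs(g_ppt - g_ref) / g_ref * 100   # ≈ 1.8 %
dev_lp = abs(g_lp - g_ref) / g_ref * 100     # ≈ 6.0 %
print(round(dev_ppt, 1), round(dev_lp, 1))
```

The PowerPoint result thus deviates roughly 4 percentage points less than the LoggerPro result, which is the "4 % lower" figure quoted above.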
However, to give a better understanding of constant-acceleration motion, the regular mode is more reliable, because the motion conditions can be observed clearly. Since smartphone camera resolutions nowadays are around 16 MP or above, the object tracking is excellent, and the PowerPoint-based video analyzer makes the analysis easy without serious concerns about inaccuracy.

4 Conclusions

A smartphone camera and a PowerPoint-based video analyzer have been successfully used to record and analyze motion in kinematic experiments. The results show that the object motion can be tracked well and the ball trajectory can be reconstructed. The gravitational accelerations obtained by the method are 9.86 m/s² for the free-fall experiment and 9.82 m/s² for the projectile experiment, comparable to the theoretical value with errors below 1 %. The results also show that the PowerPoint-based method is comparable to established tools such as Tracker and LoggerPro. Moreover, the combination of a smartphone camera and this PowerPoint method can track the object's movement well, in some cases better than the established tools. With this combination, physics learning on kinematic motion can be delivered well to students and hopefully reduce student misconceptions about the subject.

References

[1] P. Wattanayotin, C. Puttharugsa, and S. Khemmani, "Investigation of the rolling motion of a hollow cylinder using a smartphone's digital compass," Phys. Educ., 52, 045009, 2017.
[2] C. Puttharugsa, S. Khemmani, P. Utayarat, and W. Luangtip, "Investigation of the rolling motion of a hollow cylinder using a smartphone," Eur. J. Phys., 37, 055004, 2016.
[3] A. Hamidah, E. N. Sari, and R. S. Budianingsih, "Persepsi siswa tentang kegiatan praktikum biologi di laboratorium SMA Negeri se-Kota Jambi" [Students' perceptions of biology practicum activities in state senior high school laboratories in Jambi], J. Sainmatika,
8, 49–59, 2014.
[4] L. K. Wee, C. Chew, G. H. Goh, S. Tan, and T. L. Lee, "Using Tracker as a pedagogical tool for understanding projectile motion," Phys. Educ., 47, 448–455, 2012.
[5] M. Basta, M. D. Gennaro, and V. Picciarelli, "A desktop apparatus for studying rolling motion," Phys. Educ., 34, 371–375, 1999.
[6] S. Phommarach, P. Wattanakasiwich, and I. Johnston, "Video analysis of rolling cylinders," Phys. Educ., 47, 189–196, 2012.
[7] F. Vera and C. Romanque, "Another way of tracking moving objects using short video clips," Phys. Teach., 47, 370–373, 2009.
[8] E. L. Medeiros, O. A. P. Tavares, and S. B. Duarte, "Inexpensive strobe-like photographs," Phys. Teach., 47, 536–541, 2009.
[9] T. Terzella, J. Sundermier, J. Sinacore, C. Owen, and H. Takai, "Measurement of g using a flashing LED," Phys. Teach., 46, 395–397, 2008.
[10] J. Bonato, L. M. Gratton, P. Onorato, and S. Oss, "Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics," Phys. Educ., 52, 045017, 2017.
[11] P. Klein, S. Gröber, J. Kuhn, and A. Müller, "Video analysis of projectile motion using tablet computers as experimental tools," Phys. Educ., 49, 37–40, 2014.
[12] A. Mazzella and I. Testa, "An investigation into the effectiveness of smartphone experiments on students' conceptual knowledge about acceleration," Phys. Educ., 51, 055010, 2016.
[13] L. A. Testoni and G. Brockington, "The use of smartphones to teach kinematics: an inexpensive activity," Phys. Educ., 51, 063008, 2016.
[14] R. D. Septianto, D. Suhendra, and F. Iskandar, "Utilisation of the magnetic sensor in a smartphone for facile magnetostatics experiment: magnetic field due to electrical current in straight and loop wires," Phys. Educ., 52, 015015, 2017.
[15] J. A. Bryan, "Investigating the conservation of mechanical energy using video analysis: four cases," Phys.
Educ., 45, 50–57, 2009.
[16] J. K. Jones, A. Krow, S. Hariharan, and L. Weekes, "Measuring angles on digitalized radiographic images using Microsoft PowerPoint," West Indian Med. J., 57, 14–19, 2008.
[17] R. S. Meza, "Sagittal cephalometric diagnosis using Power Point (Microsoft® Office)," Rev. Mex. Ortod., 4, e8–16, 2016.
[18] D. Halliday, R. Resnick, and J. Walker, Fundamentals of Physics, John Wiley & Sons, 2010.
[19] M. Cruz, R. F. Cruz, E. A. Krupinski, A. M. Lopez, R. M. McNeeley, and R. S. Weinstein, "Research note: effect of camera resolution and bandwidth on facial affect recognition," Telemed. J. E-Health, 10, 392–402, 2004.
[20] E. B. Blackford and J. R. Estepp, "Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography," Proc. SPIE, 9417, 2015.

International Journal of Applied Sciences and Smart Technologies
Volume 4, Issue 2, pages 255–266
p-ISSN 2655-8564, e-ISSN 2685-9432

This work is licensed under a Creative Commons Attribution 4.0 International License.

The Effect of the Number of Cooling Pads on the Output Air Condition and Effectiveness of an Air Cooler

Wibowo Kusbandono 1,*

1 Department of Mechanical Engineering, Faculty of Science and Technology, Sanata Dharma University, Indonesia
* Corresponding author: bowo@usd.ac.id

(Received 01-11-2022; revised 18-11-2022; accepted 21-11-2022)

Abstract

Comfortable air can be obtained using an air conditioner or an air cooler; the electrical power used by an air cooler is relatively lower. This research aims to examine the effect of the number of cooling pads on the output air condition and on the effectiveness of the air cooler. The research was conducted experimentally, by varying the number of cooling pads used.
The distance between the cooling pads is 1.5 cm. The air entering the air cooler has a dry-bulb temperature of 30 °C with a relative humidity (RH) of 60%. The lowest dry-bulb air temperature achieved was 24.04 °C, obtained with 6 cooling pads; the highest air cooler effectiveness achieved was 0.99. The research gave satisfactory results. However, it can be developed further by varying the cooling pad material or cooling pad pattern in order to obtain the same performance with fewer cooling pads.

Keywords: air cooler, effectiveness, cooling pad

1 Introduction

An air cooler is a device used to produce cool air. Unlike an air conditioner (AC), which uses Freon as its working fluid, an air cooler uses water; the air cooler is therefore more environmentally friendly. In operation, an AC machine uses a vapor compression cycle to produce cold air, while an air cooler uses an evaporative cooling process. In an AC machine the Freon circulates in a closed flow system, while in an air cooler the water flow system is open; in an open flow system, the fluid makes contact with the outside air. An air cooler does not need to be placed in a closed room, so it can be placed in a room with free air circulation, and the users' need for oxygen from the air is not lacking. Of course, the air flow in the room should be relatively calm so that it does not interfere with the air flow from the air cooler. From the psychrometric chart, it can be seen that for air entering the air cooler at a dry-bulb temperature of 35 °C with a relative humidity (RH) of about 46%, the lowest dry-bulb outlet temperature that can be achieved is around 25 °C.
Meanwhile, if the incoming air has a dry-bulb temperature of about 30 °C with an RH of about 60%, the lowest dry-bulb output temperature that can be reached is 24 °C. The air cooler is not used to condition all the air in a room; its output air is intended for the users only, with the output flow directed at the people who want to enjoy it. With the air cooler, the output dry-bulb temperature can reach 24 °C–26 °C; at that temperature, the user can feel the coolness of the air it produces. In designing an air cooler, in addition to the output air condition being in the comfort region, the device must also have high effectiveness. Unlike an AC machine, which is designed to produce dry air [1], the air cooler is designed to produce air with relatively higher humidity; the higher the RH, the cooler the air produced. Air coolers have in fact been manufactured for a long time and have been widely patented [2-8], but in Indonesia the air cooler only became well known after it was marketed in the last decade. Besides being cheap, an air cooler is portable (it can be moved around). However, the condition of the output air depends on the condition of the air entering the air cooler [9]. The minimum dry-bulb output temperature that the air cooler can produce equals the wet-bulb temperature of the air entering the air cooler. When the output dry-bulb temperature reaches the inlet wet-bulb temperature, the relative humidity of the air reaches 100%; since the RH has reached 100%, no further water evaporation can occur on the cooling pad, so the air temperature cannot decrease any further. Unlike an AC machine, the air cooler cannot produce an arbitrary desired air condition.
For regions in Indonesia, air coolers are suitable for air conditions with ambient dry-bulb temperatures between 28 °C and 35 °C and low relative humidity (40%–50%). The lower the relative humidity of the air, the more advantageous, because the lowest achievable air temperature will be lower. Research on air coolers has been carried out by several researchers. The difference between the author's research and previous work lies in the shape of the cooling pad, the cooling pad material, the air cooler geometry, and the research variations. Some studies vary the intake air temperature [10], and some vary the thickness of the cooling pad [11]. Some use a cooling pad made of sponge [11], some use coconut fiber [12,15]. Some vary the air flow produced by the fan [12], some vary the temperature of the water used [13,14], and some vary the water discharge [15]. In the author's research, the cooling pad used is relatively thin, about 1 mm, and made of a different material from those of previous researchers; the experiment varies the number of cooling pads. On the market, the cooling pads used for air coolers are quite thick, more than 5 mm, some even more than 4 cm. The assumptions used in this research are: (1) when the air passes through the cooling pad, it undergoes an evaporative cooling process; (2) during the evaporative cooling process, the wet-bulb temperature of the air remains constant (from when the air enters the air cooler until it leaves it); (3) the evaporative cooling process runs at constant enthalpy, with no energy entering or leaving.
In the evaporative cooling process, there is a decrease in dry-bulb temperature, an increase in relative humidity, an increase in specific humidity, and an increase in dew-point temperature.

2 Research Methodology

The research was conducted experimentally. The object under study is a homemade air cooler. Figure 1 presents a schematic of the air cooler used in the study, and Figure 2 presents an image of the cooling pad. The maximum number of cooling pads used is 6, and the thickness of each water-moistened cooling pad is 1 mm.

Figure 1. Schematic of an air cooler with 6 cooling pads.

Figure 2. Cooling pads.

Air cooler components in Figure 1:
1. Submersible pump
2. Air intake fan
3. Upper water reservoir
4. Bottom water reservoir
5. Cooling pad
6. Bottom water reservoir
7. Water cooler frame
8. Case
9. Air inlet
10. Air outlet
11. Water channel
12. Air heater

Figure 3. Research flow.

In this study, the air entering the air cooler was conditioned at a dry-bulb temperature of 30 °C with a relative humidity of 60%. If the dry-bulb temperature of the incoming air did not reach 30 °C, the air was preheated by the air heater before entering the air cooler, as desired. The air flow velocity is 1.5 m/s. The research followed the flow presented in Figure 3 and was conducted by varying the number of cooling pads: 3, 4, 5, and 6 cooling pads. To determine the air condition, a dry-bulb thermometer and a wet-bulb thermometer were used, while an anemometer was used to measure the air flow velocity.
Other air conditions, such as relative humidity, absolute humidity, specific volume, and enthalpy, can be found using psychrometric charts. The research data are used to obtain the effectiveness of the air cooler. The effectiveness of the air cooler (ε) is the ratio between the actual decrease in dry-bulb temperature and the maximum possible (ideal) decrease [9], expressed by equation (1):

ε = (T_db,A − T_db,B) / (T_db,A − T_db,C)    (1)

where
ε : air cooler effectiveness,
T_db,A : dry-bulb temperature of the air entering the air cooler, °C,
T_db,B : dry-bulb temperature of the air leaving the air cooler, °C,
T_db,C : lowest dry-bulb temperature achievable by the air cooler, equal to the wet-bulb temperature during the evaporative cooling process, °C.

3 Results and Discussion

The results of the study are presented in Table 1. The condition of the output air is clearly influenced by the number of cooling pads used: with 6 cooling pads the air cooler produces the lowest output dry-bulb temperature, while with 3 cooling pads it produces the highest. This is likely due to the surface area of water in contact with the air passing through the cooling pads. The more cooling pads in the air cooler, the larger the surface area of water in contact with the air as the water flows over the pads, and the larger that surface area, the more water can evaporate into the air. Unlike boiling, the evaporation process does not have to take place at the boiling point; it can take place at any temperature.
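Equation (1) can be checked directly against the measured values of this study (inlet air at 30 °C dry-bulb and 24 °C wet-bulb, outlet at 24.04 °C with 6 cooling pads):

```python
# Air cooler effectiveness per equation (1): the actual dry-bulb
# temperature drop divided by the maximum possible drop (down to the
# inlet wet-bulb temperature). Values are the 6-pad measurements
# reported in this study.

def effectiveness(t_db_in, t_db_out, t_db_min):
    """Ratio of actual to ideal dry-bulb temperature decrease."""
    return (t_db_in - t_db_out) / (t_db_in - t_db_min)

eps = effectiveness(t_db_in=30.0, t_db_out=24.04, t_db_min=24.0)
print(round(eps, 2))   # 0.99, the highest effectiveness reported
```

The same function reproduces the other rows of Table 1 when the corresponding outlet temperatures are substituted.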
As long as the humidity has not reached 100% RH, the water flowing over the cooling pad can still evaporate into the air. The evaporation of water requires heat, which is taken from the surroundings; in this case, the surroundings are the air flowing through the cooling pad. The more water that evaporates, the greater the heat of vaporization required to change the water from the liquid phase to the vapor phase, and the greater the heat taken from the air, the lower the air temperature. The amount of heat taken from the air equals the amount of heat used to change the water from the liquid phase to the vapor phase. During the evaporative cooling process, the enthalpy of the air remains constant: the heat taken from the air is sensible heat, while the heat used to vaporize the water is latent heat. In this case, it is assumed that no heat is taken from the water itself to drive the phase change.

Table 1. Conditions of the air entering and leaving the air cooler, and the effectiveness of the air cooler (subscript A: inlet air; subscript B: outlet air).

Number of cooling pads | T_db,A (°C) | T_wb,A (°C) | RH_A (%) | w_A (g/kg) | T_db,B (°C) | T_wb,B (°C) | w_B (g/kg) | Effectiveness (ε)
3 | 30 | 24 | 60 | 16.1 | 25.15 | 24 | 18.19 | 0.81
4 | 30 | 24 | 60 | 16.1 | 24.77 | 24 | 18.35 | 0.87
5 | 30 | 24 | 60 | 16.1 | 24.38 | 24 | 18.52 | 0.94
6 | 30 | 24 | 60 | 16.1 | 24.04 | 24 | 18.66 | 0.99

Figure 4. Conditions of the air entering and leaving the air cooler, and the effectiveness of the air cooler, for different numbers of pads: (a) air cooler with 3 cooling pads; (b) air cooler with 4 cooling pads.
(c) air cooler with 5 cooling pads; (d) air cooler with 6 cooling pads.

To get a cooler dry-bulb output temperature from the air cooler, more cooling pads can be used: the more cooling pads, the lower the output dry-bulb temperature. But the number of cooling pads has a limit. If using some number n of cooling pads already produces output air with a relative humidity (RH) of 100%, then adding another cooling pad is no longer needed: it will no longer reduce the output dry-bulb temperature, because no further net phase change from water to vapor is possible; in other words, the amount of water that evaporates equals the amount of water that condenses. On the other hand, the more cooling pads used, the lower the output air flow rate, because more cooling pads create greater resistance to the air flow. Given the results of this study, the use of 6 cooling pads is sufficient, because the relative humidity of the resulting air is close to 100%, and it is not necessary to add another cooling pad. Even if one more cooling pad were added, the minimum output dry-bulb temperature that could be achieved is only 24 °C, so the effect would be small. From Table 1 and Figures 4(a)-(d), the lowest dry-bulb temperature the air cooler achieved is 24.04 °C; the air cooler is able to reduce the dry-bulb temperature by about 5.96 °C from the initial 30 °C, close to the maximum achievable decrease of 6 °C (for intake air at 30 °C and 60% RH). As for air humidity, the greater the number of cooling pads, the greater the specific humidity (w) produced.
the specific humidity of the air increased from 16.1 grams of water/kg of dry air to a maximum of 18.66 grams of water/kg of dry air, because the water content of the air increases through evaporation of water as the air passes the cooling pads. likewise, the relative humidity (rh) of the air increased from the original 60% to above 90% in every case; at its highest, the air leaving the air cooler had an rh of almost 100%. the effectiveness of the air cooler depends on the number of cooling pads used. in this study, the effectiveness of the air cooler, from lowest to highest, was 0.81, 0.87, 0.94, and 0.99 for 3, 4, 5, and 6 cooling pads respectively. this is understandable because the effectiveness depends on the dry bulb temperature of the air leaving the cooler: the lower the outlet dry bulb temperature, the larger the difference between the inlet and outlet dry bulb temperatures, and the higher the effectiveness (equation (1)). the effectiveness is directly proportional to this inlet-outlet dry bulb temperature difference, while the difference between the inlet dry bulb temperature and the lowest achievable dry bulb temperature (the temperature at which the air reaches 100% rh) remains constant. from these results, the research can be developed with the target of producing output air with a relative humidity of 100% using fewer than 6 cooling pads.
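the sensible/latent heat balance described earlier can be checked against the table 1 data. a rough sketch, assuming textbook property values not stated in the paper (specific heat of dry air cp ≈ 1.006 kj/(kg·k), of water vapour ≈ 1.86 kj/(kg·k), latent heat of vaporization hfg ≈ 2442 kj/kg near 25 °c):

```python
# energy balance for adiabatic (constant-enthalpy) evaporative cooling:
# sensible heat given up by the air ~= latent heat absorbed by the
# evaporating water, per kg of dry air:
#   cp_moist * (T_in - T_out)  ~=  h_fg * (w_out - w_in)

CP_DRY_AIR = 1.006   # kJ/(kg*K), approximate
CP_VAPOUR = 1.86     # kJ/(kg*K), approximate
H_FG = 2442.0        # kJ/kg, latent heat of vaporization near 25 C

def sensible_heat(t_in, t_out, w_in_gpkg):
    """Sensible heat released per kg of dry air (kJ/kg)."""
    cp_moist = CP_DRY_AIR + CP_VAPOUR * w_in_gpkg / 1000.0
    return cp_moist * (t_in - t_out)

def latent_heat(w_in_gpkg, w_out_gpkg):
    """Latent heat absorbed per kg of dry air (kJ/kg)."""
    return H_FG * (w_out_gpkg - w_in_gpkg) / 1000.0

# 6-pad case from table 1: 30 C / 16.1 g/kg in, 24.04 C / 18.66 g/kg out
q_sens = sensible_heat(30.0, 24.04, 16.1)   # ~6.17 kJ/kg
q_lat = latent_heat(16.1, 18.66)            # ~6.25 kJ/kg
```

the two values agree to within a few percent, consistent with the constant-enthalpy assumption; the small residual reflects the approximate property values used here.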
it can be done by using another type of cooling pad, either with a different cooling pad material or with a different cooling pad pattern.

4 conclusions

the study resulted in the following conclusions. (a) air coolers with 3, 4, 5, and 6 cooling pads respectively produced dry bulb air temperatures of 25.15 °c, 24.77 °c, 24.38 °c, and 24.04 °c. (b) for air coolers with 3, 4, 5, and 6 cooling pads, the effectiveness of the air cooler was respectively 0.81, 0.87, 0.94, and 0.99. the research can be developed by minimizing the number of cooling pads, for example by using a different cooling pad material or a different cooling pad pattern.

acknowledgements

the author would like to thank sanata dharma university, which provided financial support so that this research could be completed.

references

[1] w. arismunandar and s. heizo, "penyegaran udara edisi ke 4", pradnya paramita, jakarta, 1991.
[2] j. k. jain and d. a. hindoliya, "experimental performance of new evaporative cooling pad material", mechanical engineering department, ujjain polytechnic college, ujjain (m.p.) 456010, india, 2011.
[3] charles w. albrecht, evanston, wyo., "evaporative air cooler", united states patent 4,953,831, date of patent: sep. 4, 1990.
[4] james a. brock, alexander, ark., "portable evaporative air cooler", united states patent des. 337,817, date of patent: jul. 27, 1993.
[5] peter sydney wright, blackwood, australia, "off-road evaporative air cooler", united states patent des. 433,111, date of patent: oct. 31, 2000.
[6] william r. calton, scofield dr., cupertino, "evaporative cooling", united states patent 5,715,698, date of patent: feb. 10, 1998.
[7] united states patent 5,168,722, date of patent: dec. 8, 1992.
[8] james a. brock, alexander, ark., "portable evaporative air cooler", united states patent des.
337,817, date of patent: jul. 27, 1993.
[9] r. s. khurmi and j. k. gupta, "a text book of refrigeration and air conditioning", eurasia publishing house (p) ltd, ram nagar, new delhi-110055, 1995.
[10] doddy purwadianto and petrus kanisius purwadi, "hubungan kondisi udara masuk dengan kondisi udara keluaran air cooler", jurnal ilmiah widya teknik, 20 (2), 2021.
[11] i nyoman suryana, i nengah suarnadwipa, and hendra wijaksana, "studi eksperimental performansi pendingin evaporative portable dengan pad berbahan spon dengan ketebalan berbeda", jurnal ilmiah teknik desain mekanika, 1 (1), september 2014.
[12] ekadewi a. handoyo, fandi dwiputra suprianto, and selrianus, "penggunaan serabut kelapa sebagai bantalan pada evaporative cooler", seminar nasional teknik mesin 3, 30 april 2008, surabaya, indonesia, 2008.
[13] toni dwi putra and nurida finahari, "pengaruh perubahan temperatur media pendingin pada direct evaporative cooler", proton: jurnal ilmu-ilmu teknik mesin, 3 (1), 2011.
[14] hendra listiono, azridjal aziz, and rahmat iman mainil, "analisis evaporative air cooler dengan temperatur media pendingin yang berbeda", jurnal online mahasiswa fakultas teknik, 2 (2), 2015.
[15] m. d. amri and b. yunianto, "pengaruh debit aliran air terhadap efektifitas direct evaporative cooling dilengkapi cooling pad serabut kelapa", jurnal teknik mesin s-1, 2 (2), 2014.
international journal of applied sciences and smart technologies, volume 1, issue 1, pages 51–64, issn 2655-8564

the improvement of watershed algorithm accuracy for image segmentation of handwritten numbered musical notation

kartono pinaryanto
department of informatics, faculty of science and technology, sanata dharma university, yogyakarta, indonesia
corresponding author: kartono@usd.ac.id
(received 13-05-2019; revised 21-05-2019; accepted 21-05-2019)

abstract

implementing image processing to translate an image of numbered musical notation into numerical characters requires several initial processes, such as the image segmentation process. the advantage of a successful segmentation process is that it reduces the failure rate of the object recognition process. since the segmentation process determines the success of object recognition, a segmentation algorithm is needed that can separate objects accurately. the combined segmentation process developed in this research uses a projection profile algorithm, the watershed algorithm, and non-object filtering. the projection profile algorithm is used to crop the image of the musical score horizontally and vertically. the watershed algorithm is used to segment the numerical objects of the number notation produced by the projection profile process. non-object filtering is a continuation of the watershed algorithm that includes a non-object reduction process and an object merging process, so that the original object segments are produced. with this combination, movement through the segmentation pipeline can be carried out so that, based on the research results, the segment accuracy of the combined watershed segmentation is higher than that of watershed segmentation without the combination.
keywords: watershed algorithm, profile projection algorithm, non-object filtering, numbered musical notation image

1 introduction

indonesian regional songs are a valuable cultural heritage of indonesia, with distinctive characteristics in each region. each regional song conveys valuable advice or knowledge to the younger generation. besides their vocal parts, regional songs also have a melody written as a score in number notation [1], so the younger generation is expected to be able to understand and preserve regional songs. one way to understand folk songs is to study them, especially the scores in number notation, by performing image processing. applying image processing to translate an image of number notation into numerical characters requires several initial processes (pre-processing), such as the image segmentation process. image segmentation is the process of separating one object from another. one advantage of successful segmentation is that it reduces the level of failure in the object recognition process. in research [2], a method combining intensity filtering (high-pass filtering and low-pass filtering) as image pre-processing for noise reduction with the watershed transformation was developed to produce better quality segmentation; the combined method was able to reduce excessive over-segmentation. according to research [3], river segmentation is an important process in a river tracking system using unmanned aerial vehicles. this process largely determines the success of the river tracking system, because the output of image segmentation is the input that determines the outcome of the next process.
according to research [4], the profile projection segmentation algorithm on document images of the javanese literary text hamong tani is a relatively good algorithm for document image segmentation in terms of its reported accuracy and standard deviation. research [5] tested the approach on batak characters, and the segmentation results yielded a reported range of correct-segmentation percentages at a stated confidence level. research [6] examined image segmentation of braille documents using the projection profile method; the study was able to distinguish front-side dots (recto) from back-side dots (verso). research [7] examined text image segmentation on soil maps, showing that a radon-transform-based projection profile can be used to cluster text blocks. research [8] examined an intelligent system for automatic numbered musical notation recognition (nmrpis) using optical music recognition (omr), reporting its recognition level and performance rate. the watershed segmentation algorithm is one of the segmentation algorithms based on topographic forms [9]. research [10] combined medical image segmentation algorithms, namely reconstructed gradient, float-point active image, the watershed algorithm, and the grabcut algorithm, which cuts the image smoothly; the combination of algorithms produces a good quality image, so the subsequent analysis gives better results. according to research [11], watershed segmentation on javanese literary text document images produces many objects, so the watershed segmentation results are not good, with a low accuracy that can increase failure in the object recognition process.
the results of watershed segmentation contain many objects and non-objects, so it is necessary to filter objects from non-objects by discarding the non-objects, so that the resulting accuracy is higher. given the importance of the segmentation result as the initial step of object recognition, a segmentation algorithm is needed that can separate objects accurately. to address this problem, a score segmentation algorithm for number notation in regional songs is created as a combination of the watershed algorithm, profile projection, and non-object filtering. this research is expected to help attract the interest of the younger generation in preserving indonesian regional culture, and applying the combination can help improve the performance of the proposed algorithm in terms of the accuracy of the segmentation results.

2 research methodology

the research methodology covers the design of the research method, the implementation, and the dataset. the design of the research method outlines the process flow of the combined watershed segmentation and the process flow of watershed segmentation without the combination. the implementation phase discusses the results of implementing the watershed segmentation program. the dataset part explains the types and examples of images used.

2.1 design of research methods

the research method phase discusses the design of the combined segmentation process flow of the watershed algorithm, profile projection, and non-object filtering, compared with watershed segmentation without the combination. the design of the segmentation process flow is shown in figure 1.
figure 1. design of the process flow of image segmentation of number notation (steps: 1. binarization; 2. initial dilation; 3. profile projection segmentation; 4. dilation; 5. erosion; 6. morphological gradient; 7. watershed segmentation; 8. segment coordinate search; 9. non-object reduction; 10. object combination; 11. coordinate-to-image conversion; the side branch shows watershed segmentation without combination)

figure 1 shows the flow of the combined segmentation process of the watershed algorithm, profile projection, and non-object filtering, which includes the binarization process and the initial dilation stage through the coordinate-to-image conversion process, while watershed segmentation without the combination includes only binarization, initial dilation, dilation, erosion, morphological gradient, and watershed segmentation. the binarization process converts the grayscale format into a binary (black and white) image. in binarization, a threshold is sought, and points whose grayscale values fall within a certain range are changed to black while the rest are changed to white. this study uses the otsu method [9], whose purpose is to divide the grayscale image histogram into two distinct regions automatically, without requiring the user to enter a threshold value. the initial dilation process enlarges objects and repairs separated objects by adding layers around each object with 8-connectivity; its purpose is to smooth the binary image so that separated pixels join into a complete object. profile projection is the process of converting a binary image into a one-dimensional array (histogram) perpendicular to the x axis or y axis. image profile projections are of two kinds, namely horizontal profile projections and vertical profile projections.
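the otsu criterion used in the binarization step can be sketched in a few lines. this is a minimal pure-python illustration of the between-class-variance idea (the paper's own implementation is not shown, and a real pipeline would typically call an image-processing library instead):

```python
# Otsu's method: pick the threshold that maximizes the between-class
# variance of the two resulting pixel groups (equivalently, minimizes
# the within-class variance), using only the grayscale histogram.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]              # background pixel count (values <= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# two clearly separated intensity clusters: the threshold lands between them
pixels = [10, 12, 11, 10, 200, 210, 205, 198]
t = otsu_threshold(pixels)
binary = [0 if p <= t else 1 for p in pixels]  # -> [0, 0, 0, 0, 1, 1, 1, 1]
```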
the horizontal profile projection is the number of black pixels counted perpendicular to the x axis, computed using equation (1):

p_h(x) = ∑_{y=1}^{m} s(x, y)   (1)

whereas the vertical profile projection is the number of black pixels counted perpendicular to the y axis, computed using equation (2):

p_v(y) = ∑_{x=1}^{n} s(x, y)   (2)

in equations (1) and (2), s(x, y) is the image value at row x and column y, p_h is the horizontal profile projection, p_v is the vertical profile projection, m is the number of columns (image width), n is the number of rows (image height), x is the index for rows, and y is the index for columns.

the dilation process [9] increases the size of an object by adding layers around the object with 8-connectivity. the purpose of this dilation differs slightly from that of the initial dilation: it is performed to produce a morphological gradient image. the erosion process [9] reduces the size of an object by eroding the layer around the object with 8-connectivity; its purpose is likewise to produce a morphological gradient image. in the morphological gradient process [9], the new image produced is the difference between the dilation result and the erosion result; the morphological gradient aims to prevent excessive segmentation. the watershed segmentation process [9] is a region-based segmentation process carried out by forming dams, or watershed lines. the segment coordinate search process finds the edges of each resulting segment, represented as the row and column indices of the segment image. it serves as the initial step for non-object reduction, object merging, and sequential segment labeling. the segment coordinate search is carried out using horizontal and vertical profile projections without performing any cropping. the segment coordinate search produces a list in the form of a table which includes: 1.
the segment number of the resulting segment; 2. the segment coordinate x1, representing the top edge of the segment; 3. the segment coordinate y1, representing the left edge of the segment; 4. the segment coordinate x2, representing the bottom edge of the segment; and 5. the segment coordinate y2, representing the right edge of the segment. the non-object reduction process removes result segments that are not objects; it aims to improve the accuracy of segmentation. because the segmentation results are influenced by the morphological gradient process, the reduction process is divided into 2 stages: 1. the first stage reduces non-objects in the background segment, and 2. the second stage reduces special non-object segments in notations that have holes. for notation digits that contain holes, every segmentation produces non-object segments, in the form of holes, that are treated as object segments. such a non-object segment is a subset of an object segment, so it can be defined using equation (3):

if segment a ⊂ segment b, then delete segment a   (3)

the object merging process unites two objects resulting from segmentation into one segment object. in the watershed segmentation results, a high tone or low tone forms 2 object segments, so the object merging process needs to be carried out. a point segment and a tested segment can be merged if, vertically, the point segment shares a region with the tested segment, so the merge can be defined using equation (4):

if the point segment ∩ the tested segment ≠ ∅, then join the two segments   (4)

the coordinate-to-image conversion process is the final stage of the segmentation process.
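equations (3) and (4) operate on segment bounding boxes. a small sketch of both rules, assuming each segment is represented by its (x1, y1, x2, y2) edge coordinates as produced by the coordinate search (the representation, helper names, and sample boxes are illustrative, not the paper's code):

```python
# segments as bounding boxes: (x1, y1, x2, y2) = top, left, bottom, right

def is_subset(a, b):
    """Equation (3): segment a lies entirely inside segment b (e.g. a hole)."""
    return (b[0] <= a[0] and b[1] <= a[1] and
            a[2] <= b[2] and a[3] <= b[3] and a != b)

def overlaps_vertically(p, q):
    """Equation (4): the two segments share a column (vertical) range."""
    return p[1] <= q[3] and q[1] <= p[3]

def reduce_non_objects(segments):
    """Drop every segment that is a subset of another segment."""
    return [s for s in segments if not any(is_subset(s, t) for t in segments)]

def merge(p, q):
    """Join two segments into one covering bounding box."""
    return (min(p[0], q[0]), min(p[1], q[1]), max(p[2], q[2]), max(p[3], q[3]))

digit = (10, 5, 30, 15)   # a notation digit
hole = (15, 8, 25, 12)    # hole inside the digit -> removed by eq. (3)
dot = (32, 6, 36, 14)     # point segment below the digit -> merged by eq. (4)

kept = reduce_non_objects([digit, hole, dot])   # hole is dropped
if overlaps_vertically(digit, dot):
    merged = merge(digit, dot)                  # one low-tone segment
```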
the coordinate-to-image conversion process changes the coordinate axes, which have undergone the non-object reduction and object merging processes, into segment image files.

2.2 implementation

the results of the implementation of the combined watershed segmentation program are illustrated in figure 2.

figure 2. illustration of segmentation results: (a) line 1 sub-segmentation results; (b) detailed segment image results

2.3 dataset

the dataset used is images of handwritten number notation on paper (analog data), which are then scanned and cropped to produce jpg image files (digital data) of a fixed size in pixels. table 1 is the list of datasets used as test data. the category column consists of 3 types of categories, namely simple, medium, and complex. a simple-category image is a score image that has no flat line segment or legato; a medium-category image is a simple-category image that also has a 1-level flat line segment or legato; and a complex-category image is a medium-category score image that also has a combination of 2-level flat line segments or a combination of flat lines with legato. examples of original images of number notation are shown in figure 3. table 1.
feature image dataset of number notation

no | image name | title | origin | category
1 | feature image 1 | cublak cublak suweng | central java | simple
2 | feature image 2 | naik naik ke puncak gunung | maluku | simple
3 | feature image 3 | ayo mama | maluku | medium
4 | feature image 4 | ampar ampar pisang | south borneo | medium
5 | feature image 5 | anak kambing saya | east nusa tenggara | medium
6 | feature image 6 | gelang sipaku gelang | west sumatra | medium
7 | feature image 7 | suwe ora jamu | yogyakarta | complex
8 | feature image 8 | burung tantina | maluku | complex
9 | feature image 9 | yamko rambe yamko | papua | complex
10 | feature image 10 | cik cik periok | west borneo | complex

figure 3. original images of handwritten number notations.

3 results and discussions

the segmentation testing and discussion were carried out by testing the accuracy of the combined watershed segmentation and of watershed segmentation without the combination. the test of watershed segmentation without the combination only displays segmentation results, while the test of the combined watershed segmentation displays, in addition to the segmentation results, the results of the non-object reduction process and the object merging process.

3.1. results of segment accuracy in combined watershed segmentation

table 2 shows the accuracy results of the combined watershed segmentation; the average segment accuracy of the combined watershed segmentation is 99.74%. the factor that keeps the segment accuracy from reaching 100% occurs in the 4th image, in which 6 objects are segmented incorrectly. table 2.
results of combined watershed segmentation accuracy. columns: feature image; number of objects in the original image; number of objects in the segmentation result image; reduction of non-objects; object merging; true; false; accuracy (%); average.

figure 4 contains an example of a correct combined watershed segmentation result. figure 4 point a is the watershed segmentation result, consisting of a background segment, hole segments, and score segments. figure 4 point b is an example of a background segment and a hole segment that are non-objects, so they undergo the non-object reduction process. figure 4 point c is an example of 2 segments forming a low tone, so they undergo the object merging process. figure 4 point d is an example of segmentation results that have passed through the combined watershed process, and the three remaining segments are correct segments.

figure 4. example of image results in combined watershed segmentation (point a: watershed segmentation of the morphological gradient image into segments 1–6, with segment 1 the background segment and segment 5 a hole segment; point b: the non-object reduction process; point c: the object merging process; point d: the final result of the combined watershed segmentation)

3.2. results of segment accuracy in watershed segmentation without combination

table 3 shows that a lower accuracy is obtained for watershed segmentation without the combination. the factor that causes the decreased accuracy is the process shown in figure 5. in figure 5, point a is the result of watershed segmentation without the combination.
figure 5 point b shows the remaining non-object segments, namely the background segment and the hole segment, which do not undergo a reduction process. in figure 5 point c, segment 3 and segment 5 do not undergo the object merging process into a low note. figure 5 point d is the final result of watershed segmentation without the combination.

table 3. result of segment accuracy in watershed segmentation without combination. columns: feature image; number of objects in the original image; number of objects in the segmentation result image; true; false; accuracy (%); average.

figure 5. example of watershed image segmentation results without combination (point a: watershed segmentation of the morphological gradient image into segments 1–6, with segment 1 the background segment and segment 5 a hole segment; point b: no non-object reduction is performed; point c: no object merging is performed; point d: the final result of watershed segmentation without combination, still containing segments 1–6)

4 conclusions

based on the results of the system testing, it can be concluded that the segment accuracy of the combined watershed segmentation is higher than the segment accuracy of watershed segmentation without the combination. the increased accuracy indicates that the combined watershed segmentation algorithm is better than the watershed segmentation algorithm without the combination, so the combined watershed segmentation algorithm can be used in the object recognition process.

references

[1] s. wijayanti, seni budaya (musik) kelas x sma negeri 1 pati, seni budaya kelas x sma, 2006.
[2] murinto and a. harjoko, "segmentasi citra menggunakan watershed dan itensitas filtering sebagai pre processing," seminar nasional informatika, 43–47, may 2009.
[3] d. rahmawati, a. harjoko, and r. sumiharto, "purwarupa sistem tracking sungai menggunakan unmanned aerial vehicle," indonesian journal of electronics and instrumentation systems, 3 (2), 157–164, 2013.
[4] a. r. himamunanto and a. r. widiarti, "javanese character image segmentation of document image of hamong tani," digital heritage international congress, 1, 641–644, 2013.
[5] a. r. widiarti, a. harjoko, s. hartati, and marsono, "implementasi model segmentasi manuskrip beraksara jawa pada manuskrip beraksara batak," proceeding seminar nasional inovasi dan teknologi informasi, 81–84, october 2014.
[6] t. shreekanth and v. udayashankara, "a two stage braille character segmentation approach for embossed double sided hindi devanagari braille documents," international conference on contemporary computing and informatics, 533–538, november 2014.
[7] s. biswas and a. k. das, "text segmentation from scanned land map images using radon transform based projection profile," proceedings of the international conference of soft computing and pattern recognition (socpar), 413–418, 2011.
[8] d. min, "research on numbered musical notation recognition and performance in an intelligent system," institute of information and engineering, hunan university of science and engineering, 2–5, 2011.
[9] r. c. gonzalez and r. e. woods, "digital image processing," 2nd edition, prentice hall, usa, 2002.
[10] y. zhang and x. cheng, "medical image segmentation based on watershed and graph theory," 1419–1422, 2010.
[11] k. pinaryanto and a. r. widiarti, "implementasi segmentasi citra dokumen teks sastra jawa menggunakan algoritma watershed," undergraduate thesis, universitas sanata dharma, 2009.
international journal of applied sciences and smart technologies, volume 3, issue 1, pages 65–82, p-issn 2655-8564, e-issn 2685-9432

opencv image processing for ai pet robot

abhishek ghoshal1,*, aditya aspat1, elton lemos1
1department of computer engineering, xavier institute of engineering, mahim, mumbai, maharashtra, india
*corresponding author: abhighosh98@gmail.com
(received 23-07-2020; revised 21-12-2020; accepted 21-12-2020)

abstract

the artificial intelligence (ai) pet robot is a culmination of multiple fields of computer science. this paper showcases the capabilities of our robot. most of its functionalities stem from image processing made available through opencv. the functions of the robot discussed in this paper are face tracking, emotion recognition, and a colour-based follow routine. face tracking allows the robot to keep the user's face constantly in the frame so that facial data can be captured. using this data, emotion recognition achieved an accuracy of 66% on the fer-2013 dataset. the colour-based follow routine enables the robot to follow the user as they walk, based on the presence of a specific colour.

keywords: opencv, image processing, ai pet robot

1 introduction

there are many health benefits of owning a pet [1]. pets can increase opportunities to exercise, get outside, and socialize. there is also a slew of health benefits such as decreased blood pressure, decreased cholesterol levels, and better mental health. however, there are also many drawbacks of owning a pet [2]. there can be issues of disease transmission and infections from pets.
there are also other problems like restrictions from housing societies, training, cleaning up, extra expenses, and spending time with the pet to ensure its well-being. there have been numerous cases of cruelty to animals across the world. we propose the development of a cheap "ai pet" that can help provide the benefits of having a pet. this machine will be able to interact seamlessly with humans and indulge in activities like real pets can. as this pet is a machine, many of the drawbacks of owning a pet are nullified, such as the transmission of diseases and infections, the requirement to spend time, and cleaning up.

2 existing systems

as part of the research conducted before starting the implementation of the pet robot, we looked through some of the commercially available products. we also mention below the algorithms that we looked at before creating the different functionalities of the pet robot.

2.1. commercial products

some of the earliest products resembling an artificial pet can be traced back to the late 1990s. tamagotchis first went on sale in japan in 1997. they were small handheld devices with a screen and buttons; the 'pet' on the screen would evolve in various ways depending on user inputs. over the years many more artificial pets have been created, but pet robots are much more recent. one of the latest products, which has been hugely acclaimed, is sony's aibo. it boasts numerous artificial intelligence based features, including but not limited to facial recognition, emotion recognition, and automatic battery charging. however, it also comes with a price tag to match: $1800. there are other pet robots with similar features, but all of them cost more than $1000.
it is an important consideration for us to limit the development cost of our robot and to use cheap, commercially available parts.

2.2. face detection using haar cascade

haar cascade is a machine learning object detection algorithm proposed by paul viola and michael jones in their paper "rapid object detection using a boosted cascade of simple features" in 2001 [3]. it is a machine learning based approach in which a cascade function is trained from a large number of positive and negative images and is then used to detect objects in other images. opencv offers pre-trained haar cascade algorithms, organized into categories (faces, eyes, and so forth), depending on the images they have been trained on. we use the haar cascade algorithm for face detection from a live feed, for which we incorporate the "haarcascade_frontalface_default.xml" file into our project. from an input video feed of, say, 640 x 480 pixels, the algorithm defined by the xml file is applied to individual frames. it then returns the x and y coordinates that form a rectangle around the face of the user. these coordinates can then be used to cut the face out of the image for further processing.

2.3. facial expression recognition

this implementation uses a convolutional neural network to classify the fer-2013 [4] dataset for the task of emotion recognition, together with a real-time system for validation of the model. fer-2013 is a large dataset with over 30,000 monochrome images of size 48 x 48 pixels. the initial model achieved an accuracy of 56%; after making changes based on the results, we improved the accuracy to 66%, and we will try to improve this result further. two models were compared in this paper with the aim of getting good results with simpler architectures.
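the speed of haar cascades comes from evaluating rectangle features on an integral image, where the sum of any rectangle costs only four lookups. a minimal pure-python sketch of that underlying idea (the detector itself is taken from opencv's pre-trained cascade, not reimplemented here):

```python
# integral image: ii[r][c] = sum of all pixels above and to the left of
# (r, c), exclusive of row r and column c thanks to the 1-pixel padding.
# any rectangular pixel sum then needs only four lookups, which is what
# makes evaluating thousands of haar-like features per window cheap.

def integral_image(img):
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] via four integral-image lookups."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
ii = integral_image(img)
total = rect_sum(ii, 0, 0, 3, 3)   # 45, sum of the whole image
centre = rect_sum(ii, 1, 1, 2, 2)  # 5, just the middle pixel
```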
the initial model that was proposed was a standard fully-convolutional neural network. this model achieved an accuracy of 66% with 35,887 images in the dataset. for real-time images, the input images were pre-processed before being passed to the model. this included converting the images to grayscale and resizing them to 48 x 48 pixels, since this is the format of the fer-2013 dataset.

3 implementation methodology

our robot has an architecture closely related to that of a living pet. the computer works as the brain of the pet, the pi4b acts as the spine, the camera as the eyes, while the arduino uno and its motors act as limbs. in this section we describe our hardware design, which includes the vision unit, motor unit, communication unit, and interfacing with the robot.

• vision unit. the vision unit is responsible for providing vision to the computer for image processing. it comprises the pi4b and the raspberry pi camera module. the camera module captures a video stream, which the pi4b hosts using the rpi-cam-web-interface. the computer picks up the stream from the hosted site and can then use the data in its programs.

• motor unit. the motor unit is responsible for the movement of the robot. it consists of actuators, such as a dc motor and servos, that are controlled by the arduino uno. the motor unit is powered by a 12 v battery. the arduino oversees the motor operations; however, the decision as to what the arduino uno should do is taken by the computer.

• communication unit. while the robot receives most of its inputs through the mic, camera, and other sensors, most of the processing happens on the computer. to send data between the robot and the computer, a socket connection has been set up using the tcp protocol.
this helps to move information back and forth between the computer and the pi4b and arduino uno on the robot.

• interfacing with robot. there are two ways to interact with the robot. the user can "shake the hand" of the robot by moving a joystick mounted on the robot and then give voice commands. alternatively, commands can also be input through the cli present on the computer.

4 implementation

in this section we elaborate on the specific algorithms we have implemented. as our major processing tasks are dominated by image processing, there is extensive use of opencv and deep learning algorithms in this section.

4.1. face recognition

the robot is trained to remember its creators. we have trained a model to remember our faces using a convolutional neural network. the computer takes the data from the camera and compares it with the model. if it does not find a match, it assumes that it does not know that person and hence labels it an unknown face. if it does find a match, it prints out the name associated with the matched face. this module works in conjunction with the emotion recognition module.

4.2. emotion recognition

the robot is also trained to recognize our expressions when we feel a certain emotion, such as happiness, sadness, fear, anger, and surprise. for this we have used a convolutional neural network. the computer takes the data from the camera and compares it with the model. it then prints out the emotion that matches the most features of the tested data. this module is explained in further detail below.

4.2.1. dataset

the process began with acquiring the right dataset for our project. this was the fer-2013 dataset that was also used in [5] and can be acquired from kaggle [4]. this dataset contains 32,557 labeled images of size 48 x 48 pixels. the given labels are happy, sad, angry, fear, surprise, and neutral.
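returning to the communication unit of section 3: the tcp link between the computer and the robot can be sketched with python's standard socket module. this is an illustrative pattern only, assuming a simple request/reply exchange; the "ack" framing here is invented for the sketch and is not the project's actual protocol.

```python
import socket
import threading

def start_echo_server(host="127.0.0.1", port=0):
    """minimal tcp listener standing in for the computer side of the link."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 lets the os pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"ack:" + data)   # e.g. a command echoed back
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]     # the port actually bound

def send_reading(port, payload, host="127.0.0.1"):
    """robot side: push a sensor reading and wait for the reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
        return sock.recv(1024)
```

in the real system the pi4b would connect to the computer's fixed address rather than localhost, but the send/receive structure is the same.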
we performed some basic exploratory data analysis on this dataset; the distribution of the data is shown in figure 1.

figure 1. dataset bar graph

figure 2. example of images in dataset

to try to improve the performance of the model, we took a preemptive measure and added 255 images of ourselves, spread across the different emotion categories. we took images of our upper bodies, and to match our own data to the data in fer-2013, some processing was needed. figure 2 shows an example of images in the dataset, figure 3 illustrates the types of images added, and the steps taken are explained in figure 4.

figure 3. types of images added

figure 4. pre-processing of images before adding to dataset

before we could proceed to create the model, the target variable, that is, the "emotion" column, was converted into an encoded column. this is done because machine learning models are only able to perceive numerical values; words like "happy" or "sad" hold no meaning for them. hence, we convert such columns using an encoding that replaces the possible words with numbers, for example, happy = 1, sad = 2, fear = 3, and so on.

4.2.2. training

figure 5. facial expression recognition flow chart

figure 5 shows the basic steps taken to perform the training process. these steps are elaborated below. we referred to [5] to help get an idea of what type of model needed to be created and what its architecture should be.
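the label encoding just described, replacing emotion words with integer codes before training, reduces to a dictionary lookup. a minimal sketch follows; the particular class order used here is arbitrary and only illustrative.

```python
# illustrative class order; the actual assignment of numbers is arbitrary,
# as long as it is used consistently for training and prediction.
EMOTIONS = ["angry", "fear", "happy", "sad", "surprise", "neutral"]

def encode_labels(labels, classes=EMOTIONS):
    """replace emotion words with integer codes, since models need numbers."""
    lookup = {name: code for code, name in enumerate(classes)}
    return [lookup[label] for label in labels]
```

decoding a prediction back to a word is the reverse lookup, `classes[code]`.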
we created a cnn that takes an input array of shape (48, 48); the final output is an array of probabilities showing which of the classes the given image most closely belongs to. figure 6 shows the model architecture.

figure 6. model architecture

looking at the model, we can see that the input shape of the first layer is 48 x 48, since the size of the images in the dataset is 48 by 48 pixels. the goal of the max-pooling layers is to reduce the dimensionality of the image so that we can extract meaningful features from it.

4.3. colour-based follow function

the robot is trained to follow colour. we used the tools available in opencv to extract information such as colour from an image. we set a colour range matching a very distinctive colour of a particular shoe we have. the idea is that the robot should not encounter multiple areas of the same colour in its camera view; hence we have chosen a relatively unique colour for the follow mechanism: fluorescent yellow. as shown in [6], we are able to create a circle around the pixels with the required colour, which gives us the radius of that circle. using the size of this radius we then determine the approximate distance of the robot from the user. we have used simple if-condition statements based on the radius of this circle, along with two thresholds defined experimentally. these thresholds are used with the if-condition statements to give inputs to the motors of the robot.

• if the radius is smaller than the lower threshold, move forward (the robot is too far from the user).

• if the radius is greater than the upper threshold, move back (the robot is too close to the user).

• if the radius is between the lower and upper thresholds, stay.

using the location of the circle on the screen we have also defined conditions for the robot to turn left or right.
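the threshold logic above, together with a left/right decision from the circle's horizontal position, reduces to two small functions. the frame width and margin below are illustrative assumptions, not our calibrated values.

```python
def follow_action(radius, lower, upper):
    """map the tracked circle's radius to a drive command."""
    if radius < lower:
        return "forward"   # robot is too far from the user
    if radius > upper:
        return "back"      # robot is too close to the user
    return "stay"

def turn_action(cx, frame_w=640, margin=100):
    """map the circle's x position to a steering command (margin is illustrative)."""
    if cx < frame_w // 2 - margin:
        return "left"
    if cx > frame_w // 2 + margin:
        return "right"
    return "straight"
```

each frame, the computer would evaluate both functions and forward the resulting commands to the arduino uno over the socket link.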
5 analysis

we provide analysis in this section, covering emotion recognition and the colour-based follow function.

5.1. emotion recognition

the following is the performance of our model. we achieved an accuracy of 56% on the testing set; however, we find that we have greater accuracy in real-time testing conditions for specific emotions such as happy, angry, neutral, and surprise. figure 7 shows the performance of this model through the epochs.

figure 7. model performance per epoch

in the figure above we have shown the performance of the model for 10 epochs. we can see that the model's performance on the training set starts surpassing its performance on the validation set. this gap widens as we train for more epochs, as shown in figure 8.

figure 8. overfitting of model

the training accuracy of the model crosses 90%, whereas the validation accuracy actually starts dropping from 56%. this suggests that our model is overfitting the dataset. figure 9 shows the accuracy and loss representation of our results.

figure 9. accuracy and loss representation

we can see from both these graphs that after a certain point, training accuracy keeps increasing while validation accuracy plateaus. this further shows that the model is overfitting the data and that certain measures need to be taken. reference [7] suggests a few steps we can try to stop our cnn from overfitting a dataset. the steps we have used are:

1. use a simpler model.
2. add regularization.
3. use batch normalization.
4. use dropout.

let us understand what these do to our model.
references [7, 8] explain that regularization works on the assumption that using smaller weights simplifies the model and prevents overfitting. the l2 (ridge) regularization term is the sum of the squares of all feature weights, added to the loss as λ Σ wᵢ². l2 forces weights to be small but does not make them zero. batch normalization is a relatively new concept that was introduced after the vgg model, and it is recommended for every model. it adds a normalization layer that helps the model converge much faster in training, which in turn allows us to use higher learning rates. another common measure used to stop overfitting is dropout, which randomly sets the activations of neurons to 0. after taking the aforementioned steps into account, a new model was created that uses l2 regularization, batch normalization, and dropout layers. these helped the model break past the previous highest accuracy by a large margin: we were able to achieve an accuracy of 66%. figure 10 shows the performance of our model over the epochs.

figure 10. new model performance per epoch

comparing figure 8 and figure 10 (previous model and current model), we can see that the amount of overfitting is less in the new model. the accuracy has also increased significantly.

5.2. colour-based follow function

this follow function was tested in two different environments: indoors and outdoors. due to the lack of proximity sensors in our robot, we were worried about it crashing into other objects. however, due to the colour-based following, these issues were not encountered: the robot simply did not move towards an object if it was not the required colour. since our chosen colour is rarely encountered in the everyday environment, we did not experience any unwanted behaviour during our testing.
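as a numerical illustration of the ridge term discussed in section 5.1, the penalty added to the loss is λ times the sum of squared weights; a minimal sketch:

```python
def l2_penalty(weights, lam):
    """ridge term: lam times the sum of squared weights."""
    return lam * sum(w * w for w in weights)

def regularized_loss(base_loss, weights, lam=0.01):
    """total training loss = data loss + l2 penalty."""
    return base_loss + l2_penalty(weights, lam)
```

because the penalty grows with the square of each weight, gradient descent is pushed toward small (but generally nonzero) weights, which is exactly the behaviour described above.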
in figure 11 we can see the robot's view of the follow routine. we have stuck a fluorescent yellow paper to the shoe to emulate a sticker or an entire shoe that could be an accessory for the robot. in this example, we observe that the size of the circle is relatively small, which would prompt the robot to move towards the user. depending on further movements from the user, the robot's movements will also change.

figure 11. real time follow functionality

6 results and discussion

our results are provided in this section, covering the colour-based follow function and real-time results for emotion recognition.

6.1. colour-based follow function

using opencv and a normal camera, we have created a basic distance measurement system. this enables the robot to judge how far or close it is to the desired colour in the frame. our system is able to perform the required function in both indoor and outdoor environments, provided there is ample illumination. one of the inherent issues with the system is that when more than one object with the desired colour is present, the robot cannot distinguish between them. we would recommend using another kind of tagging mechanism that the robot can follow; for example, using rfid tags would prevent the robot from following unwanted objects.

6.2. real time results for emotion recognition

after we have saved the model, we need to run it on real-time data. the real-time module begins with the computer receiving the video feed from the pi camera mounted on the raspberry pi 4. haar cascade is used for finding a face in the individual frames of the video feed. if a face is not found in this step, haar cascade is run again on the subsequent frames until a face is found. meanwhile, the camera, which is mounted on servos, begins to sweep its 180° field of view in a systematic way.
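the servo sweep and the frame pre-processing used in this real-time loop can be sketched in a few lines of pure python. the 15-degree step is an assumed value, and the nearest-neighbour resize stands in for the opencv resize call.

```python
def sweep_angles(step=15, max_angle=180):
    """one systematic pass of the servo across the field of view: 0 -> 180 -> 0."""
    up = list(range(0, max_angle + 1, step))
    return up + up[-2::-1]

def to_grayscale(pixel):
    """itu-r bt.601 luma approximation for one (r, g, b) pixel."""
    r, g, b = pixel
    return int(0.299 * r + 0.587 * g + 0.114 * b)

def resize_nearest(image, out_w=48, out_h=48):
    """nearest-neighbour resize of a list-of-rows image to the model's 48 x 48 input."""
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]
```

in the actual system these steps run on full camera frames via opencv; the point here is only the order of operations: sweep, detect, crop, grayscale, resize, predict.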
if a face is found in this sweep, the motion stops and a snapshot of the frame is taken for further processing. the face is cut out from the entire frame, and the resulting image is converted to monochrome and then resized to 48 x 48 pixels before it can be fed to the model for making a prediction:

1. a picture is captured from the camera (see figure 12).
2. the face is extracted using haar cascade and converted to grayscale (see figure 13).
3. this is then fed to the model, and the model returns the predicted emotion (see figure 14).
4. the final prediction can be seen on the screen (see figure 15).

figure 12. emotion recognition input image

figure 13. grayscaling and face cutout

figure 14. emotion recognition output

figure 15. detecting happy emotion with face recognition

7 conclusion

the proposed algorithms act as some of the functions that our ai pet robot can perform. it is capable of differentiating between a known person and an unknown person. this will allow implementation of further functions such as a "sentry mode", where the bot could sound an alarm in the presence of unknown people. the robot is also capable of recognizing the emotions of the user, which could be a useful feature for someone undergoing therapy or counselling. finally, the follow functionality allows the user to take the ai pet robot on walks as they would any other pet, for example a dog.

references

[1] center for disease control and prevention, healthy pets, healthy people, https://www.cdc.gov/healthypets/health-benefits/index.html, accessed: 12/01/2020.

[2] e. p. cherniack and a. r. cherniack, "assessing the benefits and risks of owning a pet." canadian medical association journal, 2015.

[3] p.
viola and m. jones, "rapid object detection using a boosted cascade of simple features." proceedings of the 2001 ieee computer society conference on computer vision and pattern recognition, 1, 2001.

[4] challenges in representation learning: facial expression recognition challenge, https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data, 2013.

[5] n. poornadithya c., p. c. chengappa, t. raman, s. pandey and g. k. shyam, "emotion identification and classification using convolutional neural networks." international journal of advanced research in computer science, 2018.

[6] a. rosebrock, ball tracking with opencv, https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/, 2015, accessed 02/02/2020.

[7] r. ruizendaal, deep learning #3: more on cnns & handling overfitting, https://towardsdatascience.com/deep-learning-3-more-on-cnns-handling-overfitting-2bd5d99abe5d, 2017, accessed 23/02/2020.

[8] r. khandelwal, l1 and l2 regularization, https://medium.com/datadriveninvestor/l1-l2-regularization-7f1b4fe948f2, nov. 4, 2018, accessed 23/02/2020.

international journal of applied sciences and smart technologies, volume 1, issue 2, pages 129–146, p-issn 2655-8564, e-issn 2685-9432

microcontroller based simple water flow rate control system to increase the efficiency of solar energy water distillation

elang parikesit 1,*, wibowo kusbandono 2, fa.
rusdi sambada 2

1 politeknik mekatronika sanata dharma, yogyakarta, indonesia
2 department of mechanical engineering, faculty of science and technology, sanata dharma university, yogyakarta, indonesia
* corresponding author: elang@pmsd.ac.id

(received 03-05-2019; revised 08-06-2019; accepted 08-06-2019)

abstract

the current problem of solar energy water distillation is its low efficiency, which is caused by an inefficient water evaporation process. the efficiency of water evaporation can be increased by controlling the rate of water entering the absorber. the commonly used mechanical control system still has weaknesses, such as the instability of the water flow entering the absorber. this makes the evaporation of water less effective, so the resulting distillation efficiency is not optimal. the input water rate system for distillation in this study is based on a simple microcontroller. the microcontroller-based input water rate control system allows a small but continuous input flow rate, so the water evaporation process can be more effective. this study aims to improve the efficiency of solar energy water distillation by increasing the efficiency of the water evaporation process through control of the inlet water flow rate. the research was carried out by the experimental method. the parameter varied was the rate of input water: 0.3, 0.5, and 1.2 liters/hour. the parameters measured in this study were: (1) absorber temperature, (2) cover glass temperature, (3) cooling water temperature, (4) input water temperature, (5) ambient air temperature, (6) distilled water yield, (7) incoming solar energy, and (8) time of data recording.
the results showed that the production of distillation water using microcontroller-based water rate control was higher than that of the model without water rate control, with the largest gain at a water flow rate of 0.3 liters/hour and a correspondingly higher distillation efficiency. from the results of this study it can also be concluded that the microcontroller-based water flow rate controller is more stable than the mechanical water flow controller, especially at small flows.

keywords: microcontroller, rate of input water, distillation of water, solar energy

1 introduction

clean water is one of the basic needs of every living thing, but there are still many regions that do not have enough clean water supply, even though water itself is abundant. river areas are one example: such areas have an abundant water supply, but still little clean water, because the water supply has been contaminated with many dissolved harmful substances. therefore a water purification process is needed, and distillation is one such process. in the distillation process there are two main steps, namely evaporation and condensation. factors that can improve the evaporation process include expanding the surface of the liquid, flowing air over the surface, reducing pressure, and heating the liquid, while the factors for condensation are temperature, pressure, and humidity. the water distillation process starts with the evaporation of contaminated water, which then condenses on the cover glass. evaporation does not carry the contaminating substances, so the condensed water is suitable for consumption. the problem with solar energy water distillation is its low performance, due to evaporation and condensation processes that are not effective enough. the types of distillation that are widely used are the tub absorber type and the fabric absorber type.
the tub absorber type is the simplest type of distillation, but its performance is among the lowest. the low performance of the tub absorber type is due to the large mass of water in the tub, which keeps the evaporation process from taking place quickly. the fabric absorber type performs better than the tub absorber type because the water to be distilled is flowed over a fabric, producing a thin layer of water on the fabric that evaporates faster. the important thing for getting a thin layer on the fabric is to regulate the flow rate of water entering the absorber. the flow rate regulation must be able to produce a constant flow at a small flow rate. regulating the flow of absorber water in fabric distillation is generally carried out mechanically, for example using a tap. mechanical flow rate regulation using a tap cannot produce a constant flow, especially at a small flow rate. this study overcomes these problems by using microcontroller-based flow regulation.

2 literature review

the performance of a solar energy distillation device is determined by the amount of clean water that it can produce under the variations used [1]. many factors influence the amount of distilled water produced, including: the effectiveness of the absorber in absorbing solar energy [2], the effectiveness of the glass in condensing water vapor [3], the mass/volume of water contained in the distillation device, the surface area of the water to be distilled, the length of heating time, and the temperature of the water entering the distillation device [4]. the absorber must be made of a material with good absorptivity of solar energy; to increase absorptivity the absorber is generally painted black. the cover glass should not be too hot, because if the glass is too hot the steam will be difficult to condense.
the amount of mass/volume of water in a distillation device should not be too large, because it will prolong the heating process. a flow rate that is too large will make the evaporation process ineffective, but if it is too small the distillation device will be easily damaged by overheating. therefore it is necessary to regulate the mass flow rate well. the mass flow rate regulation commonly used is a mechanical controller using a tap. mechanical water flow control has a disadvantage, namely an unstable water flow rate, because a mechanically regulated flow is easily blocked by water vapor that appears at the tap valve. this research aims to overcome the weaknesses of the mechanical water rate control system by using a microcontroller-based flow rate controller.

3 method

experiments are carried out indoors using lamp heat energy as a simulation of solar thermal energy. in the experimental data retrieval, several variables used for the analysis are measured. these variables are: the absorber temperature in the distillation model, the glass temperature, the lamp heat energy, the amount of distilled water produced (md, liter), the area of the distillation equipment, and the cloth discharge (the incoming discharge of the distillation equipment). in detail, the experimental steps of this research are:

1. prepare a distillation device, namely a fabric-type distillation with insulated cloth (figure 1).
2. prepare the measuring instruments to be used, including temperature sensors, level sensors, solar meters, arduino microcontrollers, and stopwatches.
3. regulate the cloth discharge (the discharge into the distillation apparatus).
4.
record the absorber temperature in the distillation model (t.w), the glass temperature (t.c), the amount of distilled water produced, and the heat energy from the infrared lamp at fixed intervals for 8 hours.
5. repeat steps 2, 3, and 4 with the other variations of the flow rate.
6. perform data analysis, comparing the distilled water yield and the efficiency resulting from variations number 1, 2, and 3.

figure 1. distillation with fabric and cooler spray

data collection for each variation was carried out for 3 days, for 8 hours per day. data recording is done by sensors arranged with a microcontroller, so data can be collected every minute. data analysis and discussion of the phenomena that occur is done by making a comparison chart of the increase in water yield per 40 minutes over the 8 hours of data collection for each variation. after data collection and analysis are complete, the research continues with the compilation and processing of the results and the drawing of conclusions and suggestions. the efficiency of solar energy distillation equipment is defined as the ratio between the amount of energy used in the evaporation process of water and the amount of solar radiation that arrives during a certain time. the efficiency of a distillation device consists of theoretical and actual efficiency. theoretical efficiency (η theoretical) is defined as the ratio of the amount of energy used to raise the temperature of a given mass of water in a distillation device based on theoretical data (using solar thermal energy), whereas the actual efficiency (η actual) is defined as the same ratio based on research data collection (using lamp heat energy).
the actual efficiency (η actual) can be calculated by equation (1):

η_actual = (md · hfg) / (Ac · ∫ G_T dt)    (1)

where md is the distilled water yield (liter), hfg is the latent heat of evaporation of water (j/kg), Ac is the distillation area (m²), G_T is the heat energy of the infrared lamp (w/m²), and dt is the heating time (seconds).

the control of water flow used in this study is the microcontroller arrangement, with mechanical water flow control as a comparison. figure 2 shows a block diagram of the water rate control system along with the data acquisition. the system consists of 3 inputs (flow speed setting, water level sensor, and real time clock), while the output drives the motor of the peristaltic pump. the function of this system is to control the flow rate of the water, read the water level that has been reached, and save the results into memory.

figure 2. block diagram of a water flow controller

a peristaltic pump, controlled by a microcontroller, is used in this study to regulate the water input rate (figure 3). the microcontroller used is the atmega 328 on the arduino platform, which consists of hardware and software. the hardware consists of an on-board processor, while the software consists of the program and boot loader. in this system an arduino uno board is used, which has 20 digital input and output pins, consisting of pins d0 to d13 (14 pins) plus the analog pins a0 to a5 (6 pins). the pins can be used digitally as set in the program. pins d0 and d1 in particular are used as the communication line to the computer [5].
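equation (1) can be checked numerically with a small helper. the unit conventions below are our assumptions for the sketch: liters of distillate taken as kilograms of water, hfg in kj/kg, and lamp readings in w/m² sampled at a fixed interval.

```python
def actual_efficiency(md_liters, hfg_kj_per_kg, area_m2, g_watts, dt_seconds):
    """actual efficiency per equation (1): energy used to evaporate the
    distilled water divided by the lamp energy arriving over the heating time.
    g_watts is a list of lamp power readings (w/m2), one per sample interval."""
    evaporation_energy = md_liters * hfg_kj_per_kg * 1000.0        # j (1 liter ~ 1 kg)
    incoming_energy = area_m2 * sum(g * dt_seconds for g in g_watts)  # j
    return evaporation_energy / incoming_energy
```

the sum over `g_watts` is the discrete form of the integral of G_T dt recorded every minute by the data logger.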
figure 3. physical appearance of a peristaltic pump

peristaltic pumps are positive displacement pumps that are usually used to pump fluids. this pump works by moving a wheel that presses a flexible hose to move the fluid [6]. figure 4 shows the parts of the peristaltic mechanism.

figure 4. inner view of peristaltic mechanism

several studies related to regulating water flow at small discharges have been carried out, for example regulating the flow in hospital equipment such as infusion pumps, or flow control in the desalination process. caraballo [7] has conducted research on the use of peristaltic motors with arduino microcontroller boards to regulate water flow in the desalination process at low cost. banerjee et al. [8] also used a microcontroller and peristaltic pump in a dispenser system for mixing 2 types of microfluidic fluid.

figure 5. physical appearance of etape sensor

the water rate control system made in this study is open loop, so the accuracy of the results depends on calibration. the water rate control system works as follows: a potentiometer is used to adjust the input voltage to the microcontroller (0–5 volts). based on the input voltage, the microcontroller regulates the output voltage to the motor. the voltage given to the motor determines the motor's rotational speed. a pulse width modulation (pwm) signal with a value of 0 to 255 from the arduino determines the voltage (0–5 volts) given to the motor; the greater the voltage, the faster the motor rotation. the output current of the microcontroller is relatively small (maximum 40 ma).
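the pwm mapping just described, and the voltage-divider relation used with the etape sensor (figure 7), come down to two small formulas. this is an illustrative sketch: the 0–255 range matches arduino's analogWrite, while `r_fixed` stands for whatever fixed resistor the divider is built with.

```python
def pwm_from_setpoint(v_set, v_ref=5.0, resolution=255):
    """map the 0-5 v potentiometer setpoint to an arduino analogWrite value (0-255)."""
    v_set = min(max(v_set, 0.0), v_ref)   # clamp to the valid range
    return round(v_set / v_ref * resolution)

def divider_voltage(v_in, r_fixed, r_etape):
    """output of the divider in figure 7: proportional to the etape resistance."""
    return v_in * r_etape / (r_fixed + r_etape)

def etape_resistance(v_in, v_out, r_fixed):
    """invert the divider to recover the sensor resistance from a measured voltage."""
    return r_fixed * v_out / (v_in - v_out)
```

because the etape resistance falls as the water level rises, the measured divider voltage falls with rising level; the calibration maps that voltage back to a height.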
in order to be strong enough to drive the motor, the analog output current of the microcontroller must be amplified, in this case using a transistor as an amplifier. the rotation of the dc motor moves the peristaltic mechanism (pressing the hose). for data retrieval (data logging), a memory module (sd card) is added to the microcontroller. a real time clock (rtc) module is also added to provide real time values, while the water level is read by the etape water level sensor. etape from milone technologies is a solid state sensor for measuring fluid height [9]. the etape liquid level sensor is an innovative solid state sensor that does not use mechanical floats as is common, but uses printed electronics. the etape senses the hydrostatic pressure of the fluid in which it is immersed and produces a change in resistance corresponding to the distance from the top of the sensor to the surface of the fluid. the physical form of the etape sensor is shown in figure 5. the etape can be modeled as a variable resistor: in operation, when the liquid level rises the resistance decreases, and when the water level drops the resistance rises. the typical output characteristics of the etape sensor are shown in figure 6 below.

figure 6. plot of water height vs resistance of an etape sensor

figure 7. voltage divider circuit with an etape sensor

figure 7 shows how the etape sensor is installed in the circuit. in this circuit the output voltage is proportional to the resistance of the etape sensor, thus:

v_out = v_in · r_etape / (r_fixed + r_etape)    (2)

where r_fixed is the fixed resistor of the divider.

figure 8. water flow rate control and data acquisition circuit

figure 9.
flowchart of flow rate control

the electronic circuit for controlling the water rate and for data acquisition is shown in figure 8, while the flow diagram of how the system works is shown in figure 9.

4 results and discussion

the results achieved by this study are the completion of initial data retrieval; model enhancements have also been made. the problem encountered in building the water rate control model is the small motor torque at low speed. this is because the current entering the pump motor is still low while the voltage is reduced to lower the motor rotation. as a result, the motor is not strong enough to pump at low speeds, so small discharges are difficult to achieve. this can be overcome by adding a restriction to the suction hose; the restriction used here is cloth. the fabric resistance allows the pump to produce a small discharge at a rotation that is not too low. the results of data retrieval using the model are in accordance with the initial hypothesis: the distilled water output using water rate control was higher than that of the model without water rate control. the results of data collection can be seen in table 1.

table 1.
distillation output and efficiency for various flow rates

[table 1 lists, for each tested inlet flow rate, the distillation output (kg) and the efficiency (%) over the test duration (minutes), with one column pair per flow rate comparing control with microcontroller against mechanical control. the numeric cells were lost in extraction.]

figure 10 shows the distilled water output at the various water flow rates. the water distillation model that uses a microcontroller-based water flow rate controller produces far more distilled water. up to the 365th minute, the distillation model that uses mechanical settings (taps) did not produce distilled water. this is due to problems in the mechanical flow settings. a common problem, especially at small flow rates, is the cessation of the water flow entering the distillation model. the cessation of flow in the mechanical control, especially in small streams, is due to water vapor which clogs the channel.

figure 10. comparison of distillation output at flow rate of

at the water flow rate of variation number 2 (figure 11), the distilled water output of the model with the microcontroller-based flow rate controller is larger than that of the distillation model without microcontroller-based flow adjustment, i.e. using the mechanical controller (faucet). in contrast to variation number 1, the water distillation model with mechanical control had produced distilled water by the 150th minute.
this is because the distillation water flow rate in variation number 2 is greater than in variation number 1. at the higher water flow rate there are fewer problems with the mechanical flow rate control than at the smaller flow rate. the problem of the mechanical control in variation number 2 is a reduction of the water flow rate from the initial setting. the reduced flow is generally also caused by water vapor which blocks the flow of water entering the distillation model.

figure 11. comparison of distillation output at flow rate of

figure 12. comparison of distillation output at flow rate of

figure 13. efficiency comparison of a distillation model at flow rate of

figure 14. efficiency comparison of a distillation model at flow rate of

in variation number 3, with the initial flow rate setting, the distillation model with the mechanical arrangement begins to produce distilled water from the 90th minute (figure 12). in variation number 3 the distillation model with microcontroller settings still produces more distilled water than the model with mechanical settings. there is a problem with the mechanical control: when the water flow rate is large enough, the flow rate increases from the initial setting. this makes the evaporation process suboptimal, so the distilled water output is also small. the efficiency produced in the three variations varies linearly with the distilled water output of each variation, as can be seen from figures 13, 14, and 15.

figure 15.
efficiency comparison of a distillation model at flow rate of

figure 16. comparison of distillation output at the three flow rates

figure 17. efficiency comparison of distillation output at the three flow rates

in general, the increase in distilled water yield due to the use of microcontroller-based intake water flow control can be seen in figure 16. the biggest increase in distilled water output with microcontroller-based water flow control compared to the mechanical settings occurs at the smallest planned initial intake water flow rate into the distillation model. the highest efficiency produced by the microcontroller-based distillation is shown in figure 17.

5 conclusion

the general conclusion of this research is that the distilled water output using microcontroller-based water rate control is higher than that of the model without water rate control, with a corresponding gain in distillation efficiency. from the results of this study it can also be concluded that microcontroller-based water rate control is more stable than mechanical water flow control, especially at small flows.

acknowledgements

this work would not have been possible without the financial support from dp2m dikti, which funded this penelitian dosen pemula (beginner lecturer research) project. we thank the relevant parties in the process of conducting this research, dp2m dikti and politeknik mekatronika sanata dharma, which have supported this research.

references

[1] h. m. ahmed, a. k. al taie, and m. almea, "solar water distillation with a cooling tube," international renewable energy congress, pages 6-10, november 2010.

[2] t. j.
jansen, teknologi rekayasa surya, pt pradnya paramita, jakarta, 1995.

[3] a. j. n. khalifa and a. m. hamood, "experimental validation and enhancement of some solar still performance correlations," desalination and water treatment, 4 (13), 311-315, 2009.

[4] d. w. medugu and l. g. ndatuwong, "theoretical analysis of water distillation using solar still," international journal of physical sciences, 4 (11), 705-712, 2009.

[5] m. banzi and m. shiloh, getting started with arduino: the open source electronics prototyping platform, maker media, sebastopol, 2015.

[6] m. w. volk, pump characteristics and applications, 2nd edition, crc press, boca raton, 2005.

[7] g. caraballo, "an arduino based control system for a brackish water desalination plant," master thesis, university of north texas, denton, 2015.

[8] n. banerjee, s. mukherjee, a. mitra, a. sanyal, and s. t. mandal, "arduino based liquid dispenser system using peristaltic pump," b.s. project, west bengal university of technology, kolkata, 2017.

[9] https://milonetech.com/p/about-etape (accessed on 25-05-2019).

international journal of applied sciences and smart technologies, volume 3, issue 1, pages 35–54, p-issn 2655-8564, e-issn 2685-9432

iot based smart classroom

prajas kadepurkar1, prim dsouza1,*, nivya jomichan1

1department of computer engineering, xavier institute of engineering, mahim, mumbai, maharashtra, india

*corresponding author: prim.prd@gmail.com

(received 22-07-2020; revised 09-08-2020; accepted 23-08-2020)

abstract

a classroom is a place where there is always room for development; just as development for a student results in ease of living, a smart classroom focuses on structural development leading to effective time and energy utilization.
this project offers three major upgrades to the classroom. the first is the ability to book a classroom dynamically using a raspberry pi, toggle lights and fans using a nodemcu, and use a mobile application which also notifies a student about the subject, time, and venue of a period's commencement. the second section of this project records the attendance of a student at check-in using a portable real-time biometric system, whose data can further be used to calculate the attendance statistics of each student; these can be viewed by the respective student, and teachers can keep track of their own assigned class using the mobile application. the last part of the project focuses on keeping track of all the lights and fans which are on after the lecture. after the lecture is done, the system checks the status of the room, i.e. whether some other teacher is using the lecture hall for some other subject. when the teacher ends the class, the lights and fans are switched off after a 5-minute buffer provided for the students to check out. the system waits for 15 minutes and, if it still does not receive any request, switches off the lights and fans of that particular lecture hall.

keywords: raspberry pi, nodemcu, portable, real-time biometric system

1 introduction

in most colleges and universities, we have witnessed scenarios where the lights and fans of a classroom are turned on even if there is no person in the class, or similar scenarios where a small group of people sits in one corner of a classroom while the lights and fans in the whole classroom are switched on. these scenarios account for a great deal of electricity wastage.
as mentioned in [1], without adequate electricity it becomes challenging for a person to concentrate on their professional work or study; hence, the current scenario calls for highly efficient and effective usage of any form of power in educational institutes. suppose the working hours of an institution total 7 hours, and assume that the lights and fans are switched on for most of that time. this implies that a total of 4830 wh is consumed in 7 hours per classroom, meaning that the institution consumes about 1110.9 kwh per month, whereas with a smarter system this can be reduced to 634.8 kwh per month (a detailed calculation is shown under results). the traditional attendance system in an educational institute requires the teacher to shout out the roll numbers in the class and mark the attendance of a student on paper upon the student's acknowledgement. this system proves to be inefficient in several ways. firstly, calling out the roll numbers in front of a class is labor intensive. secondly, marking the attendance on paper wastes a lot of paper, and losing that piece of paper essentially means there is no record of the students' attendance. consider 10 subjects per year and 4 attendance sheets per subject per class per academic year, i.e. 40 sheets per year per class. if there are 12 classes in a given institution, this implies that with a smarter system we can save 480 sheets per year. lastly, a student can fraudulently acknowledge for a student not present in the class. in essence, the traditional way of marking attendance can prove to be inefficient in terms of labor, time, paper, and security.
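the two savings estimates above are plain arithmetic and can be checked directly. note that the paper's monthly totals imply 23 working days per month (48300 wh per day × 23 = 1110900 wh); that factor is made explicit here:

```python
def monthly_energy_wh(per_class_daily_wh, classes=10, working_days=23):
    """Scale one classroom's daily consumption to the whole institution
    for a month (23 working days, as implied by the paper's totals)."""
    return per_class_daily_wh * classes * working_days

# paper saved per year: subjects x sheets per subject x classes
sheets_per_year = 10 * 4 * 12

without_system = monthly_energy_wh(4830)  # lights/fans on ~7 h per day
with_system = monthly_energy_wh(2760)     # auto-toggled, ~4 h per day
saving_wh = without_system - with_system
```

running this reproduces the figures quoted in the text: 480 sheets per year, 1110900 wh (1110.9 kwh) without the system, and 634800 wh (634.8 kwh) with it.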
considering the above two scenarios, we developed a system in which one part focuses on automated control of the electrical components in a classroom: the components of a particular classroom or lecture hall are turned on only when a lecture is taking place there, which helps in saving power. this solves one of the two previously mentioned problems. the second part of our system helps reduce the difficulties encountered in the traditional attendance system. in this portable attendance system, the attendance of a student is marked using a biometric sensor, which prevents fraudulent entries and saves labor. further, the records are stored in a database, which reduces the risk of losing them. also, since the system is portable, it can be circulated among the students while the lecture is taking place, which saves the time required after the teaching session in the traditional attendance system.

2 literature survey

in this section we elaborate on the automatic lighting and control system for classrooms, a classroom scheduling service for smart classes, the design and development of a portable classroom attendance system based on arduino and fingerprint biometrics, and a web-based student attendance system using rfid technology.

2.1 automatic lighting and control system for classroom

the current scenario calls for highly efficient and effective usage of any form of power in educational institutions like colleges and universities, where we use power for teaching in classrooms or labs. it is common practice that most of us leave classrooms or labs with the air conditioner, fans, and lighting on even when no students or faculty members are present. this amounts to unnecessary wastage of power, draining the country's energy resources.
accordingly, an automatic lighting and control system using arduino for the efficient use of energy in classroom conditions, where the classroom is divided into grids, has been developed. the system controls lighting in a particular area of the classroom based on the presence of a human, using relay control, as opposed to a single ceiling-mounted control that would switch on or off based on the presence of a human anywhere in the room, irrespective of position. in addition to relay control, mobility and remote command execution are provided via an android mobile app over bluetooth, to control lighting based on voice commands [1].

2.2 a classroom scheduling service for smart classes

a typical case study demonstrates that smartclass provides a new efficient paradigm for the traditional classroom scheduling problem, which achieves high flexibility through software service reuse and eases the burden on educational programmers. evaluation results on efficiency, overheads, and scheduling performance demonstrate that smartclass has lower scheduling overheads with higher efficiency [2].

2.3 design and development of portable classroom attendance system based on arduino and fingerprint biometric

the objective of this paper is to design and develop a portable student attendance system for use in educational institutions, as well as a user-friendly attendance mechanism, especially for the lecturer, which incorporates security criteria for the stored data. the design and development of a portable classroom attendance system based on fingerprint biometrics is presented. this paper introduces a portable fingerprint-based biometric attendance system which addresses the weaknesses of the existing paper-based attendance method and long queuing times.
the system helped to reduce many issues: it denies the possibility of cheating in recording attendance, helps lecturers keep track of students' attendance, adds security through encryption so that no anonymous fingerprint is able to tamper with the recorded data, and its portability saves time in taking attendance instead of queuing in a line [3].

2.4 web-based student attendance system using rfid technology

the existing conventional attendance system requires students to manually sign the attendance sheet every time they attend a class. as common as it seems, such a system lacks automation, and a number of problems may arise: the time unnecessarily consumed by students to find and sign their name on the attendance sheet, students mistakenly or purposely signing another student's name, and the attendance sheet getting lost. a system that can automatically capture a student's attendance by flashing their student card at an rfid reader can save all the mentioned trouble. the main idea behind the system is to capture student attendance in a semi-automated way, where students are required to flash their student card at the rfid reader upon entering the classroom. this way, the student id is instantly captured by the reader, after which the data is sent to the online server for recording purposes [4].

the aforementioned papers have been taken as references and guides for our project, which collects the features presented by the individual papers into one idea. as presented in the paper on an automatic lighting and control system for classrooms [1], our iot based smart classroom has a system which switches the lights and fans of a classroom on or off based on the lecture status as initiated by a lecturer.
using the mobile application, the lecturer decides the class and lecture to be conducted; the lights and fans are switched on when the class starts and switched off automatically when the lecturer ends the class. all the students are also notified regarding the lecture status using the web application. this leads to the need for a smart class scheduling system to avoid collision and chaos. as highlighted in the paper on a classroom scheduling service for smart classes [2], the issues concerning classroom scheduling are vital, and our system ensures minimal collisions. the lecturer can only select from a set of available lecture halls when initiating a notification regarding their lecture. such measures lead to a more dynamic and more convenient scheduling methodology. the iot based smart classroom provides the main functionality of integrating intelligent toggling of the physical electrical aspects of a class with a portable biometric attendance system in a dynamically scheduled lecture hall. as mentioned in the paper on a web-based student attendance system using rfid technology [4], a more secure and efficient approach than a pen-and-paper system for attendance is to let technology do the required work; but, as presented in that paper, the use of rfid has many drawbacks, including false entries and non-portability of scanning for attendance. this is why our project uses a fingerprint sensor, as used in the paper on the design and development of a portable classroom attendance system based on arduino and fingerprint biometrics [3], but makes it portable using a raspberry pi, such that it can be circulated in the lecture hall. in our research, enhancements have been made in terms of security, cost, and performance, therefore making the classroom smart, as the project title suggests.
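the collision-avoiding hall selection described above — a lecturer may only pick from halls with no lecture in progress — can be sketched with a toy in-memory booking table. this is an illustration only; the actual system keeps this state in firebase, and all names here are hypothetical:

```python
def available_halls(all_halls, bookings):
    """Halls a professor may select: those with no lecture in progress."""
    return [hall for hall in all_halls if hall not in bookings]

def start_lecture(bookings, hall, professor):
    """Book a hall for a lecture; refuse on collision."""
    if hall in bookings:
        raise ValueError(hall + " is already in use")
    bookings[hall] = professor

def end_lecture(bookings, hall):
    """Free the hall when the professor ends the class."""
    bookings.pop(hall, None)
```

restricting the selectable set to `available_halls` is what makes the scheduling dynamic while still collision-free: two professors can never hold the same hall at once.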
3 system design

the iot based smart classroom provides the main functionality of integrating intelligent toggling of the physical electrical aspects of a class with a portable biometric attendance system in a dynamically scheduled lecture hall. in this project we make use of the nodemcu microcontroller board, which contains the esp8266 wi-fi enabled chip, and the raspberry pi, which is a single board computer.

3.1 hardware processing unit

the hardware processing unit is divided into two major subunits. the first subunit focuses on controlling the electrical components of the classroom, and the second subunit is used for marking student attendance. the attendance handling subunit uses the nodemcu development board / kit v1.0 (version 2) microcontroller, whereas the electricity handling subunit uses the raspberry pi single board computer. the nodemcu microcontrollers act as clients to the local server, which is described further below. the raspberry pi acts as an admin to the firebase server and can communicate directly with the database.

3.1.1 electricity handling unit

figure 1. block diagram of the electricity handling subunit

figure 1 shows the block diagram for the working of the electricity handling unit. each lecture hall contains its own electrical processing unit with a unique id assigned to it. a parallel physical switch is provided on the electricity grid so the switches can also be controlled manually. when the teacher starts or ends a lecture, the pi toggles the relay switch and controls the electricity grid. the raspberry pi is provided with an external power supply.
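the start/end-lecture toggling above can be sketched as a toy model of the two-channel relay module, with the no (normally open) contact closing only while a channel is triggered and nc (normally closed) behaving as its complement. the channel wiring (channel 0 for lights, channel 1 for fans) is a hypothetical assignment for illustration, not taken from the paper:

```python
class TwoChannelRelay:
    """Toy model of a two-channel relay module: per channel, the NO
    contact closes only while the channel is triggered; the NC contact
    is its complement."""
    def __init__(self, channels=2):
        self.triggered = [False] * channels

    def set_channel(self, channel, on):
        self.triggered[channel] = on

    def no_closed(self, channel):   # normally open contact state
        return self.triggered[channel]

    def nc_closed(self, channel):   # normally closed contact state
        return not self.triggered[channel]

def on_lecture_event(relay, started):
    """Pi-side handler: energize both channels when a lecture starts,
    release them when it ends."""
    relay.set_channel(0, started)  # channel 0 wired to the lights circuit
    relay.set_channel(1, started)  # channel 1 wired to the fans circuit
```

wiring the loads through the no contacts means everything stays off by default and is powered only while a lecture is in progress, which matches the energy-saving goal.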
along with handling the electricity of the particular lecture hall, the raspberry pi also updates its local file and the database according to the attendance marked. the server, in turn, keeps validating the data on the database against the local file (.json extension) and updates its contents.

figure 2. flowchart of the electricity handling subunit

a relay is an electrically controlled electromagnetic switch. it can be controlled with low-level voltages such as 5v from raspberry pi gpios or 3.3v from nodemcu pins. relays are mainly used to control circuits with a low-power signal, or when several circuits must be controlled by one signal. the input to a relay module consists of vcc (usually 5v, or 3.3v in some cases), gnd (normally connected to the negative supply), and in-n pins, where n is the number of channels the relay module has. the output consists of three pins which provide two different configurations for controlling circuits. one of the output pins is com, the common pin. the other two output pins are no (normally open), which keeps the circuit open before the relay is triggered, and nc (normally closed), which keeps the circuit closed before the relay is triggered. the flowchart of the electricity handling subunit is shown in figure 2, and a two-channel relay module is illustrated in figure 3.

figure 3. two channel relay module [5]

3.1.2 attendance handling unit

here we describe our attendance handling unit. this includes the block diagram of the attendance processing subunit, the flowchart of the attendance processing subunit, etc.

figure 4.
block diagram of the attendance processing subunit

figure 4 shows the block diagram of the attendance processing system, which uses a biometric sensor to verify each registered student's fingerprint against its respective id and sends it to the raspberry pi to be updated in the local file and the database. the oled screen displays the information of the student once his/her attendance is marked. the nodemcu is provided with a rechargeable power supply. when the device is circulated among the students, each student scans his/her fingerprint on the system. this fingerprint is then matched against the database stored on the sensor. based on the matched id, a local json file is updated to mark the respective student as present, and the matched id is sent wirelessly to the server. the name of the student, depending on the id matched, is then displayed on the screen, which acts as a validation for the student that his/her attendance has been recorded. the screen displays the message 'try again' if something goes wrong. figure 5 shows a flowchart of the attendance processing subunit. figures 6 and 7 depict the r307 optical fingerprint scanner [6] and the oled display module [7], respectively.

figure 5. flowchart of the attendance processing subunit

figure 6. r307 optical fingerprint scanner [6]

figure 7. oled display module [7]

figure 8 shows the flowchart of the attendance server which runs on the raspberry pi, continually checking for incoming requests and updating the database accordingly.
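the per-scan step above — match an id, mark the student present in the local json file, and return the text shown on the display — can be sketched as follows. the file layout and roster format are hypothetical simplifications of the actual implementation:

```python
import json

def mark_present(path, matched_id, roster):
    """Update the local attendance file for one fingerprint match and
    return the text to show on the display (student name, or 'try again')."""
    try:
        with open(path) as f:
            attendance = json.load(f)
    except FileNotFoundError:
        # first scan of the session: everyone starts as absent
        attendance = {sid: False for sid in roster}
    if matched_id not in roster:
        return "try again"   # unknown id, mirroring the on-screen error
    attendance[matched_id] = True
    with open(path, "w") as f:
        json.dump(attendance, f)
    return roster[matched_id]
```

keeping the json file authoritative on the device means attendance survives a dropped wi-fi link; the server reconciles it with the database later, as described above.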
figure 8. flowchart of the attendance server on the raspberry pi

3.2 software processing unit

the software processing unit consists of a cross-platform native mobile application. the application is developed using the flutter sdk for the dart programming language, developed by google. flutter enables the development of cross-platform mobile applications, i.e. android and ios, from a single codebase. this means that we only have to write a single piece of code, which is then converted to native machine language by the flutter sdk, to work on both platforms.

figure 9. mobile application login screen

the above mentioned application is a role based application which consists of the following two roles:

3.2.1 professor

a professor, after logging in, can select a class and a lecture hall, given that the status of both is free, to start the lecture. the professor can also view the attendance statistics of a particular class in the application. the professor can view the list of students who are defaulters subject-wise, and the class teacher can see the defaulter list of the whole class for all subjects.

figure 10. professor's mobile application interface

3.2.2 student

as a student, the user can log in to the application by entering his/her credentials. when the professor starts the lecture, each registered student is notified about the details of the lecture to be conducted.
after logging in, the user can view his/her attendance statistics for each class he/she has registered for.

figure 11. student's mobile application interface

4 results

the user interface is shown in figures 9, 10, and 11. the registered professors and students can successfully log in to their accounts. professors are able to select the classroom, the subject, and the batch of students registered for a given course; they may start or end the lecture and view the overall attendance of the students and the defaulter list for the class they teach. class teachers can also view the attendance of the entire class for all the courses the students have enrolled for. professors can also download the attendance statistics as a csv file, as shown in figure 12.

figure 12. csv attendance file sample

this results in a cut-down of paper usage for keeping attendance records for each subject of each academic year, as shown below:

• let's consider that there are 10 subjects per year.

• each subject requires 4 attendance sheets per class per year.

• therefore, 10 subjects × 4 sheets = 40 sheets per year per class.

• for batches from first year to fourth year, let's say there are 12 batches in total; this implies 40 sheets per year per class × 12 batches = 480 sheets per year.

hence, with our current system, there is no wastage of sheets for marking attendance.
as for the electricity module, consider the current scenario without the iot based smart classroom system. let's say that the average college working day is seven hours, and since there is no system to toggle the electricity according to the occurrence of a lecture or any human presence, the following is the estimated electricity consumption:

• 4830 wh in 7 hours (an average accounting for all the intervals when there is no one in the class as well).

• so for 10 such classes in a college, 4830 wh × 10 = 48300 wh in one day.

• for one month, the electricity consumption will be 1110900 wh, i.e. 1110.9 kwh for 10 classes in 1 month.

• the cost of electricity for this month will be rs. 5165 (per month).

now consider the case where our system is being used in the institution; the cost of electricity for a month is calculated as follows:

• 2760 wh in 4 hours; since the system automatically switches off the fans and lights of a lecture hall, unnecessary wastage of energy is minimized, so the average active time considered for a lecture hall in a day is 4 hours.

• so for 10 such classes in a college, 2760 wh × 10 = 27600 wh in one day.

• for one month, the electricity consumption will be 634800 wh, i.e. 634.8 kwh for 10 classes in 1 month.

• the cost of electricity for this month will be rs. 2915 (per month).

clearly, the energy consumption and the cost reduce drastically for just one month (see figure 13); therefore our system helps conserve a lot of electricity in the long run, with a decrease in expense.

figure 13.
electricity consumption comparison

5 conclusion

thus, we have proposed a smart classroom system which introduces automated control over the electrical components in a classroom for energy saving, and a portable electronic attendance system based on fingerprint identification for efficient use of time and paper, reduced labor, and even prevention of fraudulent attempts to mark attendance. the initial results are promising, but the work is still ongoing, as there is a lot of scope for further development. the future scope of this project that we are working on includes the use of facial recognition to mark the attendance of students; dividing each lecture hall into sections called grids, with motion sensors to detect moving objects in order to toggle the sectional electricity module according to the presence of a person; and developing a student-teacher portal on the app for further usability.

references

[1] s. suresh, h. n. s. anusha, t. rajath, p. soundarya and s. v. p. vudatha, "automatic lighting and control system for classroom," international conference on ict in business industry government (ictbig), 1–6, 2016.

[2] c. wang, x. li, a. wang and x. zhou, "a classroom scheduling service for smart classes," ieee transactions on services computing, 1–11, 2015.

[3] n. i. zainal, k. a. sidek, t. s. gunawan, h. manser and m. kartiwi, "design and development of portable classroom attendance system based on arduino and fingerprint biometric," the 5th international conference on information and communication technology for the muslim world (ict4m), 2015.

[4] m. kassim, h. mazlan, n. zaini and m. k. salleh, "web based student attendance system using rfid technology," ieee control and system graduate research colloquium (icsgrc 2012), 213–218, 2012.
[5] 2 channel 5v relay module - wiki, http://wiki.sunfounder.cc/index.php?title=2_channel_5v_relay_module
[6] interfacing fingerprint sensor (r307) with evive – fingerprint matching, https://thestempedia.com/tutorials/interfacing-fingerprint-sensor-r307-evive-fingerprint-matching/
[7] how to use an oled screen (128 per 64) - hackster.io, https://www.hackster.io/misterbotbreak/how-to-use-an-oled-screen-128-per-64-cb6e4d
international journal of applied sciences and smart technologies
volume 5, issue 1, pages 27-38
p-issn 2655-8564, e-issn 2685-9432
this work is licensed under a creative commons attribution 4.0 international license
smart technology in construction industry: opportunity during covid-19 pandemic
putri fatimah1,*, ahyahudin sodri1, udi syahnoedi hamzah1
1school of environmental science, universitas indonesia
*corresponding author: putri.fatimah@ui.ac.id
(received 20-12-2022; revised 24-12-2022; accepted 30-12-2022)
abstract
the construction industry is a sector that plays an essential role in economic growth. the covid-19 pandemic is an uncertain situation that significantly affects humans and industries, including the construction industry. operational construction projects are complex, with various activities that involve a large number of workers. prevention of the spread of covid-19 has changed lifestyles and accelerated technology adoption. this study examines the relationship between increased technology adoption and limited workers’ social interaction in construction projects during the pandemic.
a questionnaire survey was conducted among construction workers in dki jakarta; 74 valid responses were collected and correlation analyses were performed with spss version 28. the result of this study indicated a significant and positive correlation between increased technology adoption and limited workers’ social interaction in a construction project during the pandemic. there are opportunities for the construction industry to implement digital transformation, building information modelling (bim), and intelligent visualization technologies to cope with the impact of the covid-19 pandemic on construction activities. this study provides evidence that the application of smart technologies has a significant role in supporting the construction industry in mitigating the impact of the covid-19 pandemic, with opportunities for continuous improvement towards the post-pandemic period.
keywords: covid-19, construction, industry, smart technology
1 introduction
the world health organization (who) announced coronavirus disease (covid-19) as the name of a new disease [1]. covid-19 affected the world’s population, put pressure on the global health system, and was declared a pandemic by the who [2]. confirmed cases of covid-19 in indonesia fluctuated up to december 2022. based on the who dashboard, from 3rd january 2020 to 16th december 2022, there were 160,362 fatalities and 6,707,504 confirmed cases of covid-19 in indonesia [3]. the covid-19 pandemic has lasted over two years and is still ongoing. this crisis has an impact on various sectors, including the construction industry. table 1 summarizes the impact of the covid-19 pandemic on the construction industry from previous studies.
table 1. impact of the covid-19 pandemic on the construction industry
the united kingdom: construction companies suffered losses, financial constraints from clients and banks, delayed projects due to material deficiencies, extended project completion targets, and social distancing provisions became a major challenge in construction activities [4].
north america: delays in material delivery resulting in material shortages, reduced efficiency and productivity, slowdown of ongoing projects and delays in new projects, additional costs, and workplace safety issues [5].
china: management difficulties including stricter workplace supervision, difficulties in collaboration, reduced work efficiency, project delays, supply chain disruptions, longer material delivery times, temporary shutdown of construction sites, increased construction costs for covid-19 prevention, increased material and machinery costs, extended project time, and reduced project profits [6].
malaysia: project delays, labor shortages and job losses, time overruns, cost overruns, company financial impacts, planning and scheduling disruptions, movement restrictions, material price fluctuations, and uncertainty of company continuity [7]; decreased project productivity, increased compliance costs, and exposure of construction workers to safety and occupational health challenges [8].
indonesia: impacts on project performance, additional costs for implementing health protocols, and longer project completion times [9]; significant impact on construction projects’ completion time, project budget, and occupational health and safety [10].
the emergence of complex pandemic impacts on various aspects encourages companies and individuals to survive this crisis, including by optimizing potential opportunities.
the adoption of information technologies and digital transformation has accelerated due to the pandemic. developing smart technologies has significant impacts on cost savings, quality improvement, and productivity gains in a project, which also supports the smart construction approach to project management [11]. a smart technology system is a set of integrated technologies used to monitor and manage environments or external systems, improve human life and work, and accelerate the performance of routine or industrial operations [12]. iqbal et al. [13] state that the construction sector should pursue innovative and digital approaches to improve business operations and leverage opportunities. in the covid-19 pandemic context, research by yang et al. [6] on the impact of the covid-19 pandemic on the construction industry in china states that one positive impact of the pandemic is the improvement of technology adoption. on a similar point, li et al. [14] argue that the covid-19 pandemic brings several opportunities for developing smart construction technologies and innovations in construction management. in addition, organizations must allocate resources to survive the pandemic and be prepared for any uncertainties that may occur in the future [13]. these opportunities are realized by improving technology adoption and innovation that can support construction operations during the covid-19 pandemic. this study examines the relationship between workers’ interaction and technology adoption in construction projects. this study also explores the smart technologies that can be applied to improve the construction industry’s performance to cope with the pandemic.
2 research methodology
this study focused on construction projects in dki jakarta. based on data from the covid-19 task force in indonesia, the distribution of confirmed covid-19 cases in indonesia is highest in dki jakarta [15], which is thus referred to as the epicenter of covid-19 cases [16]. this study was conducted at a private construction company (pt x) located in dki jakarta; based on the value of the project contracts undertaken, the company is a large-category construction service company. the population in this study is workers on high-rise building construction projects in dki jakarta for which pt x is responsible as the main contractor. in order to examine the relationship between workers’ interaction and technology adoption, an online questionnaire survey of construction workers was conducted as the main data collection. data were collected from september to october 2022. the respondent criteria were construction project workers with work experience in dki jakarta during the covid-19 pandemic. a simple random sampling method was employed, with the number of samples calculated using the isaac and michael formula at an error degree of 5%. the minimum sample is 71 workers, and the actual survey collected 74 respondents. the questionnaire used a 5-point likert scale, with a rating of 1 expressing strongly disagree and 5 expressing strongly agree. attitudes, opinions, and perceptions of social phenomena are measured using the likert scale [17]. the optimal number of alternative responses is five [18]. correlation analysis was used to evaluate the strength of the relationship between variables. correlation analyses were performed with ibm spss statistics version 28.
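the correlation analysis just described can be sketched in a few lines of code: kendall's tau-b (the tie-corrected variant that spss reports for ordinal data) reduces to counting concordant and discordant pairs. the likert responses below are hypothetical, used only to illustrate the computation, not the study's actual data.

```python
# illustrative sketch: kendall's tau-b, the tie-corrected variant that
# spss reports for ordinal data, computed from pairwise comparisons.
# the likert responses below are hypothetical, not the study's data.
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """tau-b = (c - d) / sqrt((c + d + tx) * (c + d + ty))."""
    c = d = tx = ty = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        dx, dy = x1 - x2, y1 - y2
        if dx == 0 and dy == 0:
            continue            # tied on both variables: excluded
        elif dx == 0:
            tx += 1             # tied on x only
        elif dy == 0:
            ty += 1             # tied on y only
        elif dx * dy > 0:
            c += 1              # concordant pair
        else:
            d += 1              # discordant pair
    return (c - d) / sqrt((c + d + tx) * (c + d + ty))

# sanity check: identical rankings give perfect correlation
print(kendall_tau_b([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0

# hypothetical 5-point likert responses from ten respondents
adoption = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]
interaction = [3, 5, 3, 4, 2, 4, 5, 3, 4, 4]
print(round(kendall_tau_b(adoption, interaction), 3))
```

under the interpretation bands cited from [19], a value such as the 0.375 reported in the results would fall in the moderate range.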
kendall’s tau value interpretations are: weak correlation (>0–0.25); moderate correlation (>0.25–0.5); high correlation (>0.5–0.75); very high correlation (>0.75–0.99); and a value of 1 for perfect correlation [19].
3 results and discussions
3.1 demographic characteristics of the respondents
all respondents confirmed working on construction projects in dki jakarta during the covid-19 pandemic. according to table 2, almost 73% of respondents have more than 10 years of experience, and the other 27% have between 3 and 10 years. based on this result, it can be deduced that all respondents have experience in construction projects. various positions in the construction project were involved in this study, including project manager, site manager, hse, engineers, and other positions representing the perspective of general construction workers. details of the characteristics of respondents, such as work experience in the construction industry and job positions, are shown in table 2.
table 2. demographic characteristics of respondents (n=74)
work experience in the construction industry:
>3–7 years: 7 (9.4%); >7–10 years: 13 (17.6%); >10–19 years: 25 (33.8%); 20 years or more: 29 (39.2%)
job position:
project manager: 4 (5.4%); site manager: 14 (18.9%); engineer: 11 (14.9%); commercial: 8 (10.8%); hse: 11 (14.9%); general affair: 3 (4%); quality supervisor: 11 (14.9%); quantity surveyor: 4 (5.4%); drafter: 8 (10.8%)
source: primary data analysis, 2022
3.2 questionnaire data analysis
correlation analysis of the questionnaire data was performed with ibm spss. the output of the correlation analysis between increased technology adoption and limited workers’ social interaction during the pandemic is shown in table 3.
table 3. kendall’s tau correlation between increased technology adoption and limited workers’ social interaction during the pandemic
kendall’s tau correlation coefficient: 0.375**; significance level: <0.001
** correlation is significant at the 0.01 level (2-tailed)
source: primary data analysis, 2022
based on table 3, the kendall’s tau correlation value between increased technology adoption and limited workers’ social interaction during the pandemic is 0.375, which indicates a positive relationship of moderate strength, significant at the 0.01 level (2-tailed). during the covid-19 pandemic, social distancing is included in the covid-19 spread prevention protocol. increased technology adoption can reduce face-to-face interaction and communication between workers to avoid the spread of covid-19, and it can also increase the efficiency of construction work and the implementation of covid-19 prevention protocols [14]. several innovative technologies have been applied to enhance construction site performance and support the health and well-being of construction workers [6].
3.3 smart technologies in the construction industry
this section discusses some of the smart technology opportunities that can be applied in the construction industry, supporting the survey result described in the previous section. digital transformation is recognized as a new paradigm of digital innovation, and information technology plays a vital role in the company, especially during the pandemic. leontie, maha, and stoian [20] argue that digitalization is the most helpful means of coping with the pandemic situation.
the construction sector has had to prioritize digital transformation for sustainability, and despite earlier initiatives to implement digital transformation to enhance productivity, the covid-19 pandemic has significantly accelerated it [21]. the digitalization of the construction sector is linked directly to the use of various information technologies known as construction technology 4.0 [20]. ben-zvi and luftman [22] argue that digital transformation brings great changes to human life, and the covid-19 pandemic drives these changes. another study makes the same point: kamal [23] states that the adoption of digital technology is an opportunity during the covid-19 pandemic, and digital technology has been proven to support productivity. digital transformation technologies help improve productivity, enhance safety and risk mitigation, deliver high-quality buildings, and improve collaboration by collecting, analyzing, and using data from the entire supply chain in the construction industry [21]. among the impacts of the pandemic that mainly occur in construction projects are delays in construction work and in material supply, so an efficient method is important to mitigate this impact. the building information modelling (bim) concept is a combination of methods and technology that organizes operations and supports the management of 3d models and other construction information in digital format throughout the entire building life cycle [24]. bim is generally used by engineers to develop designs at each stage. bim provides visual tools and information on operations and materials, improving the quality and consistency of project life cycle costs [25]. bim is a process, method, and information-based approach to project design that enables faster project management, is cost-effective, and reduces environmental impact [26].
bim can help make the construction process more efficient, and bim-based design models can contribute to sustainable construction [27]. the application of bim can prevent design errors, make the use of materials effective, and reduce the potential for construction waste generation [28]. in the architecture, engineering, and construction industries, visualization technologies such as virtual reality and augmented reality, and automation technology using robots and intelligent algorithms, have gained popularity [11]. artificial intelligence (ai) can address the productivity issue in the construction sector; throughout the building lifecycle, ai utilizes data and leverages other technological capabilities to improve the construction process [29]. in the context of the covid-19 pandemic, using technology has benefits for improving the quality of learning [30]. visual technology can be adopted to promote health protocols and increase construction workers’ awareness of complying with covid-19 protocols, and audio-visual training can make it easier for workers to understand the training content. in other applications, biometric identification methods such as facial recognition can be used in attendance systems to prevent the spread of covid-19 caused by physical touch [31]. smart recognition gates enable effective and efficient workforce management by recording attendance and preventing unauthorized visitors [11]. according to rafiq, alimudin, and rani [31], a face recognition attendance system has been implemented with the main concept that when a face or object appears, the camera activates and displays it on the monitor while a temperature sensor measures the object; the implementation achieved an accuracy level of 80%.
the application of smart recognition gates can be developed further with the covid-19 protocol, for example by measuring body temperature and checking covid-19 vaccination status as screening when workers enter the project area.
4 conclusions
construction workers’ interactions have been restricted due to the covid-19 pandemic. based on the results and discussion, there is a significant and positive correlation between increased technology adoption and limited workers’ social interaction during the covid-19 pandemic. in short, the covid-19 pandemic accelerates technology adoption in the construction industry. there are prospects for the construction industry to adopt smart technologies such as digital transformation, building information modelling (bim), and intelligent construction technologies. the limitation of this study is that the survey asked workers about technology adoption in general. therefore, the suggestion for further study is to conduct a specific survey on technology adoption for each position, together with a cost-benefit analysis of smart technology. despite this limitation, this study provides evidence that applying smart technology supports improvement in the performance of the construction industry to cope with the impact of the covid-19 pandemic, with opportunities for the post-pandemic period.
acknowledgments
the authors would like to thank the private construction company in dki jakarta (pt x) for permitting this study and appreciate the participation of the construction workers willing to be respondents and other parties involved in this study.
references
[1] world health organization, novel coronavirus (2019-ncov) situation report – 22, (2020), [online]. available: https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200211-sitrep-22-ncov.pdf?sfvrsn=fb6d49b1_2.
[2] m. usman, y. ali, a.
riaz, a. riaz, and a. zubair, economic perspective of coronavirus (covid-19), j. public aff., 20 (4) (2020) 1–5.
[3] world health organization, who health emergency dashboard – indonesia situation, (2022), https://covid19.who.int/region/searo/country/id (accessed dec. 17, 2022).
[4] a. shibani, d. hassan, and n. shak, the effects of pandemic on construction industry in the uk, mediterr. j. soc. sci., 2117 (2020) 48–60.
[5] m. raoufi and a. r. fayek, identifying actions to control and mitigate the effects of the covid-19 pandemic on construction organizations: preliminary findings, public work. manag. policy, 26 (1) (2021) 47–55.
[6] y. yang et al., opportunities and challenges for construction health and safety technologies under the covid-19 pandemic in chinese construction projects, int. j. environ. res. public heal. artic., 18 (13038) (2021).
[7] d. y. gamil and a. alhagar, the impact of pandemic crisis on the survival of construction industry: a case of covid-19, mediterr. j. soc. sci., 11 (4) (2020) 122–128.
[8] a. olanrewaju, a. abdulaziz, c. n. preece, and k. shobowale, evaluation of measures to prevent the spread of covid-19 on the construction sites, clean. eng. technol., 5 (100277) (2021).
[9] d. larasati, n. ekawati, s. triyadi, a. f. muchlis, and a. wardhani, impact of the pandemic covid-19 on the implementation of construction contracts, iop conf. ser. earth environ. sci., 738 (1) (2021) 1–12.
[10] r. susanti, s. fauziyah, and p. u. pramesti, lesson from pandemic covid-19 for sustainability construction in indonesia, aip conf. proc., 2447 (1) (2021) 30013.
[11] m. xu, x. nie, h. li, j. c. p. cheng, and z. mei, smart construction sites: a promising approach to improving on-site hse management performance, j. build. eng., 49 (2021).
[12] m. ion and g. căruțașu, smart technology, overview, and regulatory framework, rom.
cyber secur. j., 2 (1) (2020), [online]. available: https://rocys.ici.ro/spring2020-no-1-vol-2/smart-technology-overview-and-regulatory-framework/.
[13] m. iqbal, n. ahmad, m. waqas, and m. abrar, covid-19 pandemic and construction industry: impacts, emerging construction safety practices, and proposed crisis management framework, brazilian j. oper. prod. manag., 18 (2) (2021) 1–17.
[14] z. li, y. jin, w. li, q. meng, and x. hu, impacts of covid-19 on construction project management: a life cycle perspective, eng. constr. archit. manag., (2022).
[15] covid-19 handling task force, peta sebaran perkembangan kasus covid-19 di indonesia [map of the development of covid-19 case distribution in indonesia], (2022). https://covid19.go.id/en/peta-sebaran (accessed mar. 22, 2022).
[16] r. e. caraka et al., impact of covid-19 large scale restriction on environment and economy in indonesia, glob. j. environ. sci. manag., 6 (2020) 65–84.
[17] sugiyono, metode penelitian kuantitatif, kualitatif, dan r&d [quantitative, qualitative, and r&d research methods], 19th ed. bandung: alfabeta, (2013).
[18] x. chen, h. yu, and f. yu, what is the optimal number of response alternatives for rating scales? from an information processing perspective, j. mark. anal., 3 (2) (2015) 69–78.
[19] j. sarwono, path analysis dengan spss [path analysis with spss]. jakarta: elex media komputindo, (2012).
[20] v. leontie, l. g. maha, and i. c. stoian, covid-19 pandemic and its effects on the usage of information technologies in the construction industry: the case of romania, buildings, 12 (2) (2022).
[21] s. kim, m. lee, i. yu, and j. w. son, key initiatives for digital transformation, green new deal and recovery after covid-19 within the construction industry in korea, sustain., 14 (14) (2022).
[22] t. ben-zvi and j. luftman, post-pandemic it: digital transformation and sustainability, sustain., 14 (22) (2022) 1–11.
[23] m. m.
kamal, the triple-edged sword of covid-19: understanding the use of digital technologies and the impact of productive, disruptive, and destructive nature of the pandemic, inf. syst. manag., 37 (4) (2020) 310–317.
[24] j. p. carvalho, l. bragança, and r. mateus, optimising building sustainability assessment using bim, autom. constr., 102 (2019) 170–182.
[25] m. zoghi and s. kim, dynamic modeling for life cycle cost analysis of bim-based construction waste management, sustain., 12 (12) (2020) 2483.
[26] j. čabala, m. kozlovská, z. struková, and a. tažíková, benefits of bim models and mixed reality in the implementation phase of construction projects, iop conf. ser. mater. sci. eng., 1252 (1) (2022) 012079.
[27] s. soltani, the contributions of building information modelling to sustainable construction, world j. eng. technol., 4 (2) (2016) 193–199.
[28] a. turkyilmaz, m. guney, f. karaca, z. bagdatkyzy, a. sandybayeva, and g. sirenova, a comprehensive construction and demolition waste management model using pestel and 3r for construction companies operating in central asia, sustain., 11 (6) (2019).
[29] s. o. abioye et al., artificial intelligence in the construction industry: a review of present status, opportunities and future challenges, j. build. eng., 44 (2021).
[30] h. suparwito, information technology and learning methodology amid the covid-19 pandemic, int. j. appl. sci. smart technol., 2 (2) (2020) 107–118.
[31] a. a. rafiq, e. alimudin, and d. p. rani, employee presence using body temperature detection and face recognition, int. j. appl. sci. smart technol., 4 (2) (2022) 173–184.
international journal of applied sciences and smart technologies
volume 4, issue 2, pages 267-280
p-issn 2655-8564, e-issn 2685-9432
this work is licensed under a creative commons attribution 4.0 international license
pre-service science teachers’ competence and confidence in scientific inquiry
rohandi1,*
1department of physics education, sanata dharma university, yogyakarta, indonesia
*corresponding author: rohandi@usd.ac.id
(received 05-11-2022; revised 20-11-2022; accepted 29-11-2022)
abstract
the quality of science learning has a very important role in science education and is largely determined by the quality of science teachers. inquiry-based science learning is becoming a model that must be developed today. for inquiry-based learning to be carried out properly, science teachers must have adequate inquiry competencies. teachers must also have adequate confidence in conducting inquiry-based learning with students in the classroom. the objective of this study was to examine students’ competence and confidence in scientific inquiry. 42 pre-service science teachers were involved in this study. the data collected were analyzed using rasch modeling. the results of the data analysis show that the mean rasch score for students’ competence (1.76 logits; sd = 1.20) is higher than the mean rasch score for students’ confidence (1.41 logits; sd = 1.01). these results show that although students feel competent to do inquiry, they do not yet fully have the confidence to carry out inquiry learning with students in classroom activities. implications of these results for science teachers and pre-service science education are discussed.
keywords: pre-service science teacher, scientific inquiry, learning science
1 introduction
science education plays an important role in providing a good climate for students who have an interest in exploring science. the more students who are interested in science, the greater the chance that a scientist or researcher in science is born. the development of science cannot be separated from the development of scientific investigations by scientists. at the heart of the process of inquiry in the field of science is the process of hypothesis testing through a series of experiments. besides gaining an understanding of science, students need to have experiences and skills in testing their ideas to solve a problem through the process of inquiry. in the inquiry-based learning process, students are trained to think and reason properly and correctly. engaging students in applying thinking and reasoning skills and promoting inquiry-based instruction has become the focus of many science educators. the process of inquiry promotes the exploration of questions raised by both students and the teacher. when inquiry process skills are connected with science content, students discover meaningful concepts and understandings [1]. in this way, students will use these experiences to contribute to their identity in science in and outside the classroom and, eventually, a future career. in terms of students’ science identity, they are able to demonstrate performance in relevant scientific practices with deep, meaningful knowledge and understanding of science, and recognize themselves and be recognized as science persons by others.
students develop identities by engaging in science activities and in broader tasks in their community of practice in accordance with the science classroom. researchers have advocated for teachers to lead students to collaboratively solve problems in the context of real-world situations and students’ culture [2] instead of conducting validation experiments based solely on textbooks. the foundation of science learning activities is the interaction between students and objects or phenomena, both material objects and events. students’ interaction with natural objects is not just to describe the situation; beyond that, it is hoped that it will be continued with generalization activities, which can develop students’ cognitive and affective potential. understanding science begins with a specific problem space, where scientists must formulate a problem-solving plan, form a hypothesis, perform an experiment, and gather evidence to explain the problem [3]. similarly, students must learn to ask questions about specific issues and answer those questions based on evidence. they learn to explore, gather evidence from different sources, construct arguments, construct explanations based on available information, and communicate and defend their conclusions. in inquiry-based learning, teachers act as facilitators and students take greater responsibility for their learning. constructivism advocates that teachers help students think through and solve problems that require higher-order thinking and reconstruct their knowledge by interacting with the environment. inquiry-based learning is an effective method to achieve this objective, leading students to actively seek knowledge and generate new ideas, which are key features of inquiry-based science learning [4].
the essential characteristics of inquiry are connecting personal knowledge and scientific concepts, designing experiments, discovering, and building meaning from data and observations [5]. these characteristics describe how implementing inquiry teaching leads students to enhance their science learning by processing personal experience and connecting new and old knowledge. in summary, inquiry-based learning engages students to question, design and implement discovery, analyze, and communicate their findings to expand their knowledge. to ensure the achievement of the objectives of learning in science education, a well-prepared instructional design for learning science is needed so that students are guaranteed to gain hands-on experience and opportunities for conceptualization and are trained to use science process skills in the inquiry process. such a learning objective in science education will be possible to achieve if (prospective) science teachers have adequate inquiry competence in teaching science, which will have a good impact on student learning as well as on training students to conduct science investigations (doing science). the skills and knowledge of scientific inquiry enable science teachers to be successful in their teaching of science. in teaching science and accompanying students to learn science, science teachers need to do so with confidence. besides competence in inquiry, science teachers’ confidence in conducting and guiding students to learn through the inquiry process is an important factor in the successful teaching of science. confidence is a feeling of self-assurance arising from one’s appreciation of one’s own abilities or qualities. science teachers’ confidence in inquiry will promote students’ learning even when facing difficulties.
Teacher confidence in inquiry helps students feel ready for their inquiry activities and life experiences. When science teachers are confident in teaching through inquiry, they are more likely to move forward with students and to create opportunities to realize students' potential in learning science. In this study, pre-service teachers' competence and confidence were investigated, and the Rasch model was used to analyze the collected data. Pre-service teachers' competence and confidence are mapped and discussed, and some implications are formulated from the results of the study.

2 Research Methodology

2.1 Sample

This study aimed to investigate pre-service science students' competence and confidence in inquiry at a school of teacher education. 42 students from the Primary Teacher Education Department who participated in the Teaching and Learning Science course were involved in this study.

2.2 Data Collection

The instrument used in this study was a questionnaire measuring students' competence and confidence in inquiry in learning science, developed by Chang [6]. This questionnaire was designed to evaluate pre-service science teachers' competence and confidence in inquiry, and consisted of 14 items each for competence and confidence. As this instrument had not been used in Indonesia before, the questionnaire was first translated into Bahasa Indonesia. The students' responses were categorized using a Likert scale, whose extreme categories are labeled "strongly disagree" (coded 1) and "strongly agree" (coded 4). The instruments were administered in the presence of a researcher, who provided assistance if the students encountered any difficulty, and were distributed at the beginning of the Teaching and Learning Science course.
Overall, the administration of the questionnaires proceeded smoothly, and all students had sufficient time to complete the questionnaire.

2.3 Data Analysis

The responses on pre-service teachers' competence and confidence in scientific inquiry were analyzed using Winsteps (a Rasch-model computer program). For the items to be usable in the Rasch model, the item infit mean square and outfit mean square should be distributed between 0.7 and 1.4, and the item point-measure correlation should be greater than 0.3 [7]. The Rasch model was applied in analyzing the data in this research; it provides valuable data for the development, modification, and monitoring of valid measurement instruments. In this paper, the Rasch model was used to examine students' competence and confidence in inquiry in the Primary Teacher Education Department. The equal-interval measures produced by the Rasch transformation are used to map persons and items onto a linear (interval) scale. Such mapping (called person–item maps) produces useful tools for evaluating students' competence and confidence in inquiry, and the resulting maps provided ways to evaluate and interpret the data. Items ordered in person–item maps illustrate the level of item difficulty, so items which are more difficult to agree with and items which are easier to agree with can be identified. The Rasch model explains how pre-service teachers' competence and confidence in scientific inquiry predict a student's response to a particular item involving competence and confidence in scientific inquiry. Students at the same logit value as an item have a 50% chance of correctly answering that item. Items above a student's ability level can still be answered correctly, but the student has less than a 50% chance of correctly answering them.
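The 50% reading of the person–item map follows from the dichotomous Rasch model, in which the probability of a positive response depends only on the gap between person ability and item difficulty, both in logits. The following is a generic sketch of the model, not tied to this study's data:

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model: P = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty, both in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person located at the same logit as an item has exactly a 50% chance.
print(rasch_probability(1.76, 1.76))       # 0.5
# Items above the person's level are endorsed with probability below 50%,
# items below the person's level with probability above 50%.
print(rasch_probability(1.76, 3.0) < 0.5)  # True
print(rasch_probability(1.76, 0.0) > 0.5)  # True
```

This is why the vertical position of persons and items on the map can be read directly as endorsement probabilities.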
Items listed below a student are those that the student has more than a 50% chance of correctly answering. Consequently, the higher the position of an item on the single line, the more difficult the item is to agree with; conversely, the lower the position of an item, the easier it is to agree with.

3 Results and Discussion

Before the Winsteps output could be used to interpret the results of the data analysis, all items used in the questionnaire were first evaluated (diagnosed) to check whether they met the Rasch model criteria. The evaluation showed that for the competence items, the infit mean square was distributed between 0.49 and 1.40, the outfit mean square between 0.46 and 1.40, and the item point-measure correlation was greater than 0.3. For the confidence items, the infit mean square was distributed between 0.64 and 1.36, the outfit mean square between 0.64 and 1.36, and the item point-measure correlation was greater than 0.3. These diagnostic results indicate that all items could be used in the Rasch model analysis. Furthermore, all data (42 students and 14 items) were transformed using Rasch analysis to order students along the continuum of the measures of competence and confidence in inquiry. The distributions of students (n = 42) according to competence and of items (n = 14) according to difficulty are shown in Figure 1. The left-hand side of Figure 1 represents the distribution of students. Items located below a participant are items that the student was likely to agree with; items located above are items that the student was unlikely to agree with. The mean Rasch score for students' competence was 1.76 logits (SD = 1.20).
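The diagnosis step above can be sketched as a simple filter against the fit criteria from Section 2.3 (infit and outfit mean squares between 0.7 and 1.4, point-measure correlation above 0.3). The item statistics below are made up for illustration; in practice Winsteps reports these values for every item:

```python
# Screen Rasch item-fit statistics against the criteria used in this study.
# The three items and their statistics here are hypothetical.
items = [
    {"name": "item1", "infit": 0.95, "outfit": 1.02, "pt_measure": 0.55},
    {"name": "item2", "infit": 1.38, "outfit": 1.35, "pt_measure": 0.41},
    {"name": "item3", "infit": 1.62, "outfit": 1.80, "pt_measure": 0.21},
]

def fits_rasch_criteria(item, lo=0.7, hi=1.4, min_corr=0.3):
    """Return True if the item meets all three fit criteria."""
    return (lo <= item["infit"] <= hi
            and lo <= item["outfit"] <= hi
            and item["pt_measure"] > min_corr)

kept = [it["name"] for it in items if fits_rasch_criteria(it)]
print(kept)  # ['item1', 'item2'] -- item3 is flagged for review
```

Items failing the screen would normally be revised or dropped before the person–item maps are interpreted.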
By looking at which items are located above and below this point, we can understand the students' average level of competence, while the mean Rasch score for the items was 0.0 logits (SD = 1.36). Comparing the mean Rasch scores of persons and items and their respective standard errors, the students' competence is compatible with the item difficulty scores. This means that, for this sample of students, it could not be established whether or not they tend to have more competence. However, the distribution of the items on the map carries valuable information about the students' existing competence. Figure 1 displays an item–person map of inquiry competence in which students are placed relative to the hierarchy of items. On the right side, items are listed in order of difficulty, with the hardest item to agree with at the top (item 12) and the easiest at the bottom (item 13).

Figure 1. Item map of pre-service teachers' competence

The mean Rasch score for students' confidence was 1.41 logits (SD = 1.01), and the mean Rasch score for the items was 0.0 logits (SD = 0.95). Comparing the mean Rasch scores of persons and items and their respective standard errors, the students' confidence is compatible with the item difficulty scores. This means that, for this sample of students, it could not be established whether or not they tend to have more confidence. However, the distribution of the items on the map carries valuable information about the students' existing confidence. Figure 2 displays an item–person map of inquiry confidence in which students are placed relative to the hierarchy of items. On the right side, items are listed in order of difficulty, with the hardest item to agree with at the top (item 12) and the easiest at the bottom (item 10).
Figure 2. Item map of pre-service teachers' confidence

The results of the data analysis show that the mean Rasch score for students' competence (1.76 logits; SD = 1.20) is higher than the mean Rasch score for students' confidence (1.41 logits; SD = 1.01). These results show that, although students feel competent to do inquiry, they do not yet fully have the confidence to carry out inquiry learning with students in classroom activities. In terms of inquiry competence, as shown in Figure 1, students have difficulty in describing and interpreting data through scientific terminology (item 12), controlling extraneous variables that may interfere with results (item 7), and posing verifiable hypotheses according to data (item 4). These three competencies require high-level thinking skills, so the results indicate that students' thinking skills still need to be improved for these competencies to be carried out better. This difficulty is consistent with students' confidence in conducting inquiry learning. As shown in Figure 2, students are least confident in using learned scientific terms to explain the meaning of experimental data (item 12) in science class. It is also indicated that, in carrying out inquiry learning, students are still not confident in choosing suitable study methods based on the question (item 5) and in considering possible factors that may influence the experiment (item 6). The results of the data analysis (see Figure 1) also show that students are able to build conclusions according to collected data (item 13), infer according to collected data (item 14), and conduct experiments according to a predefined plan (item 8).
These results are consistent with the analysis of students' confidence in conducting inquiry learning. Students are very confident (see Figure 2) in carrying out an experiment in accordance with the experiment's procedures (item 10). This is likely because students are very familiar with the recipe-style practicum model, in which everything that must be done is already described in the practicum instructions; students just need to follow the procedures that have been compiled in the order of their activities. From the results of the data analysis as outlined above, some implications for science learning in schools or in pre-service science education can be formulated as follows:

• Science learning activities in schools need to be directed toward inquiry rather than prescription models. In the prescription model, students are given little or no opportunity to propose problems for investigation, ask questions, formulate hypotheses, design procedures, process answers and explanations, predict and communicate results, identify assumptions, use logical and critical thinking, and engage in argumentation. This also responds to the demands of 21st-century learning, since most 21st-century skills can be taught within the context of scientific inquiry or project-based learning, which requires teachers to engage students in self-directed strategies, to organize activities that delegate learning decisions to students and monitor their progress, to facilitate learning activities such as collective problem solving, and to guide students in thinking about complex problems by giving them feedback following assessment [8]. Furthermore, Chu et al. [8] suggest that delivering inquiry-based tasks is important in learning science.
Inquiry-based tasks facilitate students to become active agents in building knowledge through constructing their own understanding and through meaning-making, which requires them to have an inquiry mindset. Similarly, Kuhlthau et al. [9] argue that inquiry is a way of learning new skills and broadening our knowledge for understanding and creating in the midst of rapid technological change; inquiry is the foundation of the information-age school [9]. In the teacher preparation context, inquiry-based instruction can also improve the critical thinking and inquiry skills of students in pre-service teacher education [10, 11].

• It is important to conduct experiments in the laboratory or outside the classroom while focusing not only on the results but also on the process of inquiry. The model that needs to be developed is reflective inquiry, in which there is an element of reflection in each step of the experiment in order to: 1) recognize the extent to which students have confidence in carrying out the experiment correctly, 2) recognize the extent to which students need to learn and practice the steps of inquiry, and 3) formulate an improvement action plan. This is in line with research showing that the use of reflective worksheets in inquiry-based learning activities promoted students' scientific process skills, such as defining the problem, formulating a hypothesis, and observing and interpreting results during the inquiry-based learning process. Students also improved in abilities such as using scientific terms, drawing scientific and comprehensible figures, and making scientific explanations. In addition, students had more positive opinions about the learning process [12].
To ensure the achievement of the objectives of science education, a planned instructional organization is needed, so that students are guaranteed hands-on experience and opportunities for conceptualization, and are trained to use science process skills. Facilitating students' learning is a crucial factor, and inquiry-based approaches encourage science teachers to take on a facilitative role. Proper teacher guidance allows students to internalize inquiry skills in every step of the investigation. This is in line with the findings of Kuhlthau et al. [9] that students need considerable guidance and intervention throughout the process to enable depth of learning and personal understanding. Without guidance, students often approach the process as a simple collecting-and-presenting assignment that leads to copying and pasting with little real learning. With the teacher's guidance, students are able to concentrate on constructing new knowledge and learning useful strategies in each stage of the inquiry process. One strategy that focuses on the process of acquiring inquiry skills is guided inquiry; with this model, the teacher provides essential intervention at critical points in the inquiry process, fostering deep personal learning and transferable skills [9]. In inquiry learning, teachers need to restructure their learning environment so that students' beliefs about science, scientists, and themselves will lead to positive attitudes.

4 Conclusion

As the objective of this study was to examine students' competence and confidence in scientific inquiry, the results of the data analysis show that the mean Rasch score for students' competence (1.76 logits; SD = 1.20) is higher than the mean Rasch score for students' confidence (1.41 logits; SD = 1.01).
This indicates that, although students in pre-service science education feel competent to do inquiry, they do not yet fully have enough confidence to carry out inquiry learning with students in classroom activities. For the development of the competence and confidence of pre-service science teachers in scientific inquiry, science activities in the classroom need to be directed toward the inquiry model, with an emphasis not only on the results of the investigation but also on proper guidance during the inquiry process.

References

[1] R. K. Alwardt, "Investigating the transition process when moving from a spiral curriculum alignment into a field-focus science curriculum alignment in middle school," Lindenwood University, 2011.
[2] R. Rohandi and A. N. Zain, "Incorporating Indonesian students' 'funds of knowledge' into teaching science to sustain their interest in science," Bulgarian Journal of Science & Education Policy, 5(2), 2011.
[3] C. C. Selby, "What makes it science," Journal of College Science Teaching, 35(7), 8–11, 2006.
[4] S. Abell, G. Anderson, and J. Chezem, "Science as argument and explanation: exploring concepts of sound in third grade," Inquiry into Inquiry Learning and Teaching in Science, 100–119, 2000.
[5] J. Hinrichsen, D. Jarrett, and K. Peixotto, "Science inquiry for the classroom: a literature review," Programme report, Oregon: The Northwest Regional Educational Laboratory, 1999.
[6] H.-P. Chang, C.-C. Chen, G.-J. Guo, Y.-J. Cheng, C.-Y. Lin, and T.-H. Jen, "The development of a competence scale for learning science: inquiry and communication," International Journal of Science and Mathematics Education, 9(5), 1213–1233, 2011.
[7] J. Linacre, "A user's guide to Winsteps-Ministep: Rasch-model computer programs. Program manual 3.68.0," Chicago, IL, 2009.
[8] S. K. W. Chu, R. B. Reynolds, N. J. Tavares, M.
Notari, and C. W. Y. Lee, 21st Century Skills Development Through Inquiry-Based Learning: From Theory to Practice, Springer, 2021.
[9] C. C. Kuhlthau, L. K. Maniotes, and A. K. Caspari, Guided Inquiry: Learning in the 21st Century, ABC-CLIO, 2015.
[10] Irwanto, A. D. Saputro, E. Rohaeti, and A. K. Prodjosantoso, "Using inquiry-based laboratory instruction to improve critical thinking and scientific process skills among preservice elementary teachers," Eurasian Journal of Educational Research, 19(80), 151–170, 2019.
[11] A. Stavrianoudaki and A. Smyrnaios, "Effects of inquiry-based learning cooperative strategies on pupils' historical thinking and co-creation," in Education Beyond Crisis, Brill, 300–315, 2020.
[12] A. Mutlu, "Evaluation of students' scientific process skills through reflective worksheets in the inquiry-based learning environments," Reflective Practice, 21(2), 271–286, 2020.

Appendix

A. Students' competencies in scientific inquiry:
1. Be able to pose questions according to observed data
2. Be able to pose an explorable question
3. Be able to describe a concept with an operational definition
4. Be able to pose a verifiable hypothesis according to data
5. Be able to pose a feasible explorative plan according to the question
6. Be able to manipulate variables related to the plan
7. Be able to control extraneous variables that may interfere with results
8. Be able to experiment according to a predefined plan
9. Be able to collect data through different methods
10. Be able to record data through different instruments
11. Be able to compare and classify data collected from an experiment
12. Be able to describe and interpret data through scientific terminology
13.
Be able to build a conclusion according to collected data
14. Be able to infer according to collected data

B. Students' confidence in inquiry teaching:
1. In science class, I could ask questions about what I don't understand through observation.
2. When learning science, I could collect information related to questions to obtain a deeper understanding.
3. When learning science, I could deduce possible answers to the questions.
4. In science class, I could describe what data should be collected in the experiment.
5. In science class, I could choose suitable study methods based on the question.
6. In science class, I could consider possible factors that may influence the experiment.
7. In science class, I could design the experimental steps based on the question.
8. In science class, I could observe and record the results of the experiment carefully.
9. In science class, I could operate the experimental apparatus to measure data.
10. In science class, I could carry out the experiment in accordance with the experiment's procedures.
11. In science class, I could compare or classify data collected in the experiment.
12. In science class, I could use scientific terms learned to explain the meaning of experimental data.
13. In science class, I could draw conclusions based on the mathematical relationship among experimental data.
14. In science class, I could explain experimental results or phenomena based on the experiment's conclusion.
International Journal of Applied Sciences and Smart Technologies
Volume 1, Issue 1, pages 65–82
ISSN 2655-8564

Factors Influencing the Difficulty Level of the Subject: Machine Learning Technique Approaches

Hari Suparwito
Department of Informatics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
Corresponding author: shirsj@jesuits.net
(Received 07-05-2019; Revised 21-05-2019; Accepted 21-05-2019)

Abstract

The difficulty level of a subject is needed both to understand students' acceptance of the subject and to ascertain the highest level of student achievement in it. Several factors are considered: the kind of instruction, the readiness of the instructor and students in teaching and learning, evaluation and monitoring systems, and student expectations. Many factors are involved, and educators should know them; it is better if they can discern which are the primary factors and which the secondary ones. The purpose of the study is to find the determinant factors in establishing the difficulty level of the subject from the students', teachers', and infrastructure points of view using three machine learning techniques. The MSE and variable importance measurements were used to relate factors such as attendance, the instructor, and other factors as independent variables to the difficulty level of the subject as the dependent variable. The study showed that the gradient boosting machine obtained MSE values of 1.14 and 1.30 for the training and validation datasets. The model identified the five most important independent factors, i.e.
attendance, the instructor, whether the course gives students a new perspective, whether the quizzes, assignments, projects, and exams contributed to the learning, and whether the instructor was committed to the course and understandable. The gradient boosting machine is superior to the other methods, with the lowest MSE and MAE values. Two methods, the gradient boosting machine and deep learning, produced the same five main factors influencing the difficulty of the subject; this means these factors are significant and should receive attention from the stakeholders.

Keywords: machine learning, regression, deep learning, random forest, gradient boosting machine, data mining, education

1 Introduction

Education provides people with knowledge about life and the world; it helps build character and leads to illumination. Given the importance of education, researchers ask themselves what factors influence the process of teaching and the attitude of students so that the students can understand the subjects, and what factors help to measure the difficulty level of subjects. The difficulty level of subjects is needed both to understand students' acceptance of their subjects and to ascertain the highest level of student achievement in them [1]. John D. et al. [2] examined several aspects and conducted reviews based on learning conditions, student characteristics, materials, and criterion tasks for effective learning techniques. Another group of researchers [3] found that the social context influenced effective teaching and learning; some factors mentioned were direct instruction, frequent monitoring, a sense of community, and student expectations. There are many factors involved here.
Research on education using data mining has been increasing and promising in recent years, mostly focusing on students' performance, the effectiveness of learning, and students' and teachers' perceptions of learning [4]. Romero et al. stated that the objective of using data mining in education is to improve learning itself; the actors are students and teachers, with the subjects of learning and the way they are delivered as the medium relating them. Vanthienen and De Witte [5] revealed that their study showed the use of machine learning methods to be advantageous, especially when facing a nonlinear interaction function, such as the role of a school principal in accommodating district-size policies. Other research in the education field using machine learning techniques was undertaken by Liao and Zingaro [6], who stated that, using machine learning techniques, they can identify students who are at risk of performing poorly in a course. Moreover, a machine learning approach has also been applied to evaluating and predicting students' level of proficiency [7]. To successfully predict the quality of this type of educational process, the authors used one of the machine learning techniques; they claimed that the proposed technique could be used effectively in educational management when the online teaching strategy should be selected based on students' goals, individual features, needs, and preferences. Finally, Cope and Kalantzis [8] claimed that the use of machine learning and big data analysis in research on education should be undertaken because these emerging sources of evidence of learning have significant implications for the relationships between assessment and instruction.
Moreover, for educational researchers, these datasets are in some senses different from conventional evidentiary sources, and this raises a new approach and gives a different point of view to traditional research in education. The objective of this research is to find the determinant factors that affect students' acceptance, focusing on the difficulty level of students' understanding of the subjects. Instead of using a statistical approach, in the present study we applied three machine learning techniques, i.e., deep learning, random forest, and gradient boosting machine. Another purpose of this research is to introduce and compare the results of three machine learning methods in education. As the data set, we used the student evaluation data from Gazi University, Ankara [9], taken from the UCI repository. This data set is examined by three machine learning techniques using the H2O platform. This paper is organized as follows. In Section 2, we describe the research methodology following the process of data mining approaches; the results based on the H2O data mining tools are presented and discussed in Section 3. In Section 4, we provide the conclusion and the subsequent research outlook.

2 Research Methodology

In general, the steps in this study follow the model of data mining techniques [10]:

2.1 Objective Determination

The first step was to identify the real-world problem. This study attempts to answer the educational question of how to understand and measure the difficulty level of a subject from the students', teachers', and infrastructure points of view. To be more precise, the following research question was raised: what are the determinant factors which make students think and establish that a subject is difficult or easy?
A hypothesis was created to test which attributes in the data set give a significant contribution toward the research question: students' perception of the level of subject difficulty is more likely to be influenced by the subject syllabus, the activities and interactions between students and instructors, and the readiness of students and teachers to engage in the learning process. By analyzing and testing this hypothesis, we shall know the determinant factors, answering the question of why students think that a subject is difficult to understand and, moreover, what should be done by teachers so that students can accept and understand the subject materials more easily.

2.2 The Proposed Work

To examine three machine learning models, we selected the Turkiye Student Evaluation data set from the UCI Machine Learning Repository [9]. The dataset's dimensionality was first reduced using principal component analysis (PCA), followed by data normalization using z-normalization. The dataset was then randomly split in an 80% : 20% ratio into training and validation datasets. The three machine learning techniques were applied to the training dataset, yielding the regression model, the MSE and MAE values, and the variable importance for each method. Using the model, we evaluated the validation dataset to obtain the MSE and MAE values and the variable significance for the testing dataset. The whole process is shown in the diagram below.

Figure 1. The proposed work: starting from selecting the dataset through to delivering the MSE and MAE values and the variable importance ranking.

2.3 Data Pre-processing

The research used the data from the student questionnaire at Gazi University, Ankara, Turkey [9].
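The pre-processing workflow in Section 2.2 (dimensionality reduction, z-normalization, 80/20 split) can be sketched with NumPy on synthetic Likert-style data. The array shape mirrors the data set (5820 responses, 28 Likert items), but the values here are random, so the retained components are illustrative only; note also that the sketch applies z-normalization before PCA, the conventional order:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(5820, 28)).astype(float)  # synthetic Likert 1..5

# Z-normalization: zero mean, unit variance per feature.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via eigendecomposition of the covariance matrix; keep components
# whose eigenvalue exceeds one (the criterion used in the study).
cov = np.cov(Xz, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
scores = Xz @ eigvecs[:, eigvals > 1.0]  # principal component scores

# Random 80/20 split into training and validation sets.
idx = rng.permutation(len(scores))
cut = int(0.8 * len(scores))
train, valid = scores[idx[:cut]], scores[idx[cut:]]
print(train.shape[0], valid.shape[0])  # 4656 1164
```

On the real data this selection yields the five components reported in Table 1; on random data the number of retained components is meaningless.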
The dataset was obtained from the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/index.php). There are 5820 instances in the data set with 33 attributes, of which 28 are on a Likert-type scale with values from 1 to 5, where 1 corresponds to "strongly disagree" and 5 to "strongly agree". The five other attributes are questions answered in natural-number format. The questions can be grouped into three substantive groups based on the students', teachers', and infrastructure points of view. Next, we undertook a PCA for feature reduction. The correlation matrix from the PCA showed the eigenvalue of each feature. New variables (principal components) were selected based on eigenvalues greater than one, which resulted in five principal components. We analyzed these and found that the five principal components can be grouped into attendance, instructor, subject preparation, quizzes or exams, and the relationship between students and instructors.

Table 1. Principal components
Component   Standard deviation   Proportion of variance   Cumulative variance
PC1         6.140                0.588                    0.588
PC2         3.686                0.212                    0.800
PC3         1.701                0.045                    0.845
PC4         1.411                0.031                    0.876
PC5         1.059                0.017                    0.894

Figure 2. The cumulative proportion of variance versus principal component.

From the five principal components, we selected the features with a high rank based on the eigenvector values of each feature. Finally, we found 15 features that can be used in this study; the number of features was therefore reduced from 33 to 15. The reduced feature set is shown in the following table.

Table 2.
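The figures in Table 1 are internally consistent: each proportion of variance equals the squared standard deviation (the eigenvalue) divided by the total variance, and the cumulative column is a running sum. A quick check:

```python
# Verify Table 1: proportion of variance = sd**2 / total variance,
# cumulative variance = running sum of the proportions.
sds = [6.140, 3.686, 1.701, 1.411, 1.059]
props = [0.588, 0.212, 0.045, 0.031, 0.017]

total = sds[0] ** 2 / props[0]  # implied total variance, about 64.1
derived = [round(sd ** 2 / total, 3) for sd in sds]
cumulative = [round(sum(props[: i + 1]), 3) for i in range(len(props))]

print(derived)     # matches the reported proportions
print(cumulative)  # [0.588, 0.8, 0.845, 0.876, 0.893]
```

The last cumulative value computed from the rounded proportions is 0.893, differing from the table's 0.894 only through rounding of the individual proportions.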
PCA analysis results

Features: difficulty (target label), attendance, instructor, and the following questionnaire items:
Q1  The semester course content, teaching method and evaluation system were provided at the start.
Q4  The course was taught according to the syllabus announced on the first day of class.
Q5  The class discussions, homework assignments, applications and studies were satisfactory.
Q7  The course allowed fieldwork, applications, laboratory, discussion and other studies.
Q8  The quizzes, assignments, projects and exams contributed to helping the learning.
Q12 The course helped me look at life and the world with a new perspective.
Q16 The instructor was committed to the course and was understandable.
Q21 The instructor demonstrated a positive approach to students.
Q22 The instructor was open and respectful of the views of students about the course.
Q24 The instructor gave relevant homework assignments/projects and helped/guided students.
Q25 The instructor responded to questions about the course inside and outside of the course.
Q27 The instructor provided solutions to exams and discussed them with students.
Q28 The instructor treated all students in a right and objective manner.

2.4 Data Mining

The next step after data pre-processing was to decide which kind of evaluation to apply to the data set. The regression task was chosen because the data set is already organised in attributes, and the questionnaire answers, on a Likert-type scale from 1 to 5, are already ordered. Another reason is that the goal of this study is to discover which attributes are the determinant factors of the difficulty level of the subject. Three machine learning techniques, namely deep learning (DL), random forest (RF) and gradient boosting machine (GBM), were used to examine the data set, focusing on the regression analysis between the 15 attributes as independent variables and the difficulty level of the subject as the target or dependent variable.

2.4.1.
Deep Learning

Introduced for the first time by Hinton et al., DL has become more and more popular as a method for solving problems in machine learning [11]. Deep learning is a branch of machine learning that aims to imitate the working of the human brain using an artificial neural network. Unlike other machine learning approaches, a deep learning model is complex and has a high capability to learn, work with and classify data. In general, a DL network consists of three kinds of layers: input, hidden and output. Input layers hold the raw input data. Hidden layers observe, learn and classify the data based on the references; in DL there are usually more than three hidden layers. Output layers present the results.

Figure 3. Deep learning diagram (picture taken from https://www.kdnuggets.com/2017/05/deep-learning-big-deal.html).

2.4.2. Random Forest

Random forest is an ensemble learning technique for classification [12]. RF works by constructing a collection of decision trees at training time and returning the class that is the mode of the classes of the individual trees. Like DL, the RF algorithm has a significant advantage when analysing many kinds of datasets. It can address high-dimensional data with an excellent ability to learn from a large amount of data, and it can perform regression and classification on nonlinear sample data.

Figure 4. Random forest architecture for classification and regression analysis (picture taken from https://www.researchgate.net/figure/architecture-of-the-random-forestmodel_fig1_301638643).

2.4.3. Gradient Boosting Machine

Gradient boosting is a form of machine learning boosting.
Boosting means that the target outcomes for each case are set based on the gradient of the error with respect to the prediction. The idea behind GBM is to set the target outcomes for the next model so as to minimise the error: each new model moves in the direction that minimises prediction error [13]. Even though RF and GBM are both ensemble learning methods, they differ in the order in which the trees are created and in the way the results are combined. GBM adds new trees that complement the ones already built, which usually gives better accuracy with fewer trees. Therefore, GBM performs better than RF if its parameters are tuned carefully [14].

2.4.4 Cross-Validation

The goal of cross-validation is to test the model's ability to predict new data and to give an insight into how well the model will generalise to an independent dataset. The k-fold cross-validation (CV) method was applied to the training and testing data sets of each machine learning model. The k-fold CV method was selected as the data sampling method because every data instance should be evaluated in both the training and testing sets, and with a fairly large number of instances k-fold CV samples the training and testing sets quite well. The experiment is repeated several times; the number of repetitions is the k value. Although some researchers argue that k = 10 is the best value, in this research the best k value was chosen by repeating the experiment with various k values [15]; k-fold CV with k = 10 was finally applied. Each machine learning method works with a number of specific parameters that must be adjusted to find the best result. We used grid search analysis to find the parameters that provide the optimum results.
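The k-fold splitting and grid search described above can be sketched as follows. This is a NumPy-only illustration, not the authors' H2O workflow: the parameter names echo the grid in the next section, but the scoring function is a made-up stand-in for "train the model on each fold and average the MSE".

```python
from itertools import product
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Hypothetical grid in the spirit of the paper's grid search
grid = {"ntrees": [50, 100, 200], "epochs": [50, 100, 200]}

def cv_error(params, n=5820, k=10):
    """Stand-in for 'fit a model with params on each fold, average the error'."""
    fold_errors = [1.0 + params["ntrees"] * 1e-3 + params["epochs"] * 1e-3
                   for _train, _test in kfold_indices(n, k)]
    return sum(fold_errors) / len(fold_errors)

candidates = [dict(zip(grid, combo)) for combo in product(*grid.values())]
best = min(candidates, key=cv_error)   # grid search keeps the lowest-error setting
print(best)
```

In a real run, `cv_error` would train and score the actual model; the structure (enumerate all parameter combinations, score each by cross-validation, keep the best) is what the grid search performs.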
The following table shows the grid search parameter values applied for each model.

Table 3. Grid parameter values

Model  Grid parameter values
DL     function: rectifier; tanh
       hidden layers: 200, 200, 100, 50 | 100, 100, 50 | 50, 100, 100, 50
       epochs: 50; 100; 200
       CV: 5; 10
RF     ntrees: 50; 100; 200
       epochs: 50; 100; 200
       CV: 5; 10
GBM    ntrees: 50; 100; 200
       epochs: 50; 100; 200
       CV: 5; 10

The best performance of each model was obtained with the following parameters.

Table 4. Best parameter values

Model  Parameter values
DL     function: rectifier
       hidden layers: 200, 200, 100, 50
       epochs: 200
       CV: 10
       input dropout: 0.2
RF     ntrees: 200
       epochs: 100
       CV: 10
GBM    ntrees: 50
       epochs: 50
       CV: 10

Table 4 shows the best parameters given by the grid search analysis.

3 Results and Discussions

Three machine learning methods were used to examine the dataset. The results obtained were the MSE and MAE values of each method and the variable importance. The mean squared error (MSE) measures the difference between the estimator and what is estimated, and is obtained by applying the following formula:

MSE = (1/n) Σᵢ₌₁ⁿ (Yᵢ − Ŷᵢ)²   (1)

where Ŷ is a vector of n predictions and Y is the vector of observed values corresponding to the inputs of the function that created the predictions; Yᵢ is the i-th value of the vector. In this study, the training dataset comprised 80% of the data population, while the remaining 20% was used as the testing dataset. The H2O machine learning tools were applied to the training and testing datasets, and the MSE value results are presented in the following table.

Table 5.
MSE and MAE values of the three machine learning models

Model  Training MSE  Training MAE  Validation MSE  Validation MAE
DL     1.25          0.89          1.33            0.92
RF     1.31          0.92          1.38            0.91
GBM    1.14          0.84          1.30            0.90

The lowest MSE value is the best result because it describes the similarity between the real values and the predicted values. In other words, the lower the MSE, the higher the prediction accuracy, since there is a close match between the actual and predicted data. In this study, the lowest MSE value is obtained by the GBM model. Like the MSE, the MAE value is obtained by the formula

MAE = (1/n) Σᵢ₌₁ⁿ |yᵢ − xᵢ|   (2)

where x and y are the observed and predicted values. A lower MAE value also indicates better performance of the model.

The best model for the prediction can also be identified using the deviance of the training and testing datasets [16]. Deviance measures how well the model predicts; it is a generalisation of the idea of using the sum of squared residuals in ordinary least squares to cases where model fitting is done by maximum likelihood. The following picture shows the deviance score for each number of trees in GBM.

Figure 5. GBM deviance score for each number of trees. Only the GBM model result is shown because the GBM method obtained the best result.

3.1 Variable Importance

Wei et al. [17] stated that it is essential to know which factor or variable is more significant in regression or prediction analysis, whereas Grömping [18] argued that a predictive analysis is more convincing when the most influential predictor variable is identified, though finding variable importance is challenging and some regression models are not directly designed to provide it. Therefore, another method needs to be used to find the variable importance.
Some techniques in machine learning can be used as an alternative way to find the variable importance, especially when dealing with high-dimensional input data and categorical output. Which variables are more significant in predicting the difficulty of the subject? Three ML methods were applied in this study. The percentage of mean squared error (MSE) and mean absolute error (MAE) was measured, which indicates which variable has a more significant influence than the other variables in predicting the difficulty of the subject. Table 6 shows the rank of the variable importance results; as an example, the graph of the variable importance from the GBM result is given in Fig. 6.

Table 6. Variable importance results of each model

DL:
1. Attendance
2. Instructor
3. Q12 The course helped me look at life and the world with a new perspective.
4. Q16 The instructor was committed to the course and was understandable.
5. Q8 The quizzes, assignments, projects and exams contributed to helping the learning.

RF:
1. Attendance
2. Q22 The instructor was open and respectful of the views of students about the course.
3. Q25 The instructor responded to questions about the course inside and outside of the course.
4. Q21 The instructor demonstrated a positive approach to students.
5. Instructor

GBM:
1. Attendance
2. Instructor
3. Q12 The course helped me look at life and the world with a new perspective.
4. Q8 The quizzes, assignments, projects and exams contributed to helping the learning.
5. Q16 The instructor was committed to the course and was understandable.

The DL and GBM models produce the same set of important variables, even though Q8, Q12 and Q16 have different ranks; the main five factors produced by the DL and GBM analyses are the same.
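The rankings above come from the built-in variable-importance output of the models. A generic, model-agnostic alternative that conveys the same idea is permutation importance: measure how much the MSE of Eq. (1) grows when one feature's values are shuffled. The sketch below is illustrative only, using synthetic data and a linear stand-in model rather than the paper's dataset and methods.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 3))
# Synthetic target: feature 0 matters most, feature 2 not at all
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=400)

def mse(y_true, y_pred):
    """Mean squared error, as in Eq. (1)."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

A = np.c_[X, np.ones(len(X))]                 # linear stand-in model (with intercept)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
base = mse(y, A @ coef)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])      # break feature j's link to the target
    importance.append(mse(y, np.c_[Xp, np.ones(len(Xp))] @ coef) - base)

ranking = np.argsort(importance)[::-1]        # most important feature first
print(ranking)
```

The error increase caused by shuffling a feature is that feature's importance; sorting these increases reproduces the kind of ranking shown in Table 6.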
For all three machine learning models, the two main factors, attendance and instructor, have a significant influence in determining the difficulty level of the subject. This means these two factors are the most important predictors for the subject-difficulty variable. A previous study also revealed that students' performance depends not only on their academic effort but also on other aspects with a similar influence [19].

Figure 6. GBM variable importance.

To answer the main question in the first section, we can now look at the rank of the variable importance, especially from the DL and GBM results. Observing which features have a significant influence, we can draw some points here:

a) Attendance has the most significant impact. The respondents thought that attendance, whether by students or by instructors, has an important role and shapes their presumption about the subjects. Attendance means participation and involvement between students and instructors.

b) Instructors and their attitudes or approach to the students are related to the subjects. The students are convinced that the instructors have a significant impact on delivering the subjects, whether the material is easy or difficult to understand. This aspect is also related to the instructors' attitude, such as how committed the instructor was to the course, how they respond when students ask about the subject in or out of class, and how they encourage the students to do their best with the selected subjects. A previous study by Martin et al. [20] stated that instructors are an essential factor in making subjects appear easy or difficult to their students.

c) The course can give a new perspective to students. A new perspective can be a driver for the students.
Therefore, they would focus on learning the subject, which in turn makes the subject easier to learn. In other words, giving a new perspective on life becomes a stimulus for the students to learn and love the subjects.

d) The quizzes, assignments, projects and exams contributed to helping the learning. The students need a way to express their ability in understanding the subjects. The students felt that reading theory was not enough; they needed exercises, and by doing the exercises they could understand the subject better. These aspects were also mentioned by Henderson and Harper [21], who revealed that corrections, assessment and teacher feedback on students' quizzes can help the students prepare for their exams better.

4 Conclusions

Three machine learning algorithms, i.e. deep learning, random forest and gradient boosting machine, with k-fold CV data sampling, have been applied to analyse the difficulty level of the subject based on the students', teachers' and infrastructure's points of view. The data set was collected from the student questionnaire results at Gazi University, Ankara. The results revealed five determinant factors: attendance; instructor; "the course helped me look at life and the world with a new perspective"; "the quizzes, assignments, projects and exams contributed to helping the learning"; and "the instructor was committed to the course and was understandable". These five determinant factors can affect students' and instructors' perspectives on the difficulty level of the subject, with attendance and instructor as the two main factors. This study also demonstrated that data mining methods can be employed in the education field. However, the ability to understand data and how to work with them is crucial.
Data mining processes are important; in particular, the step-by-step stage model of data mining can be used as guidance on how to apply data mining to real-world problems. In a subsequent study, it would be possible to compare these techniques with other classification and regression algorithms. Another possibility is to compare other tools, such as Orange and RapidMiner, which also provide machine learning algorithms, on the same problem.

Acknowledgements

This research was supported by the Department of Informatics Engineering, Sanata Dharma University. We would also like to thank the anonymous reviewers, whose comments greatly improved the manuscript.

References

[1] M. T. Tillery and A. Fishbach, "How to measure motivation: a guide for experimental social psychologists," Social and Personality Psychology Compass, 8 (7), 328−341, 2014.

[2] J. Dunlosky, K. A. Rawson, E. J. Marsh, M. J. Nathan, and D. T. Willingham, "Improving students' learning with effective learning techniques: promising directions from cognitive and educational psychology," Psychological Science in the Public Interest, 14 (1), 4−5, 2013.

[3] P. Hallinger and J. F. Murphy, "The social context of effective schools," American Journal of Education, 94 (3), 328–355.

[4] C. Romero and S. Ventura, "Data mining in education," Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 3 (1), 12−27, 2013.

[5] J. Vanthienen and K. D. Witte, "Data analytics applications in education," CRC Press, Taylor & Francis Group, 2017.

[6] S. N. Liao et al., "A robust machine learning technique to predict low-performing students," ACM Transactions on Computing Education (TOCE), 19 (3), 18, 2019.

[7] N. Kushik, N. Yevtushenko, and T.
Evtushenko, "Novel machine learning technique for predicting teaching strategy effectiveness," International Journal of Information Management, 2016. https://doi.org/10.1016/j.ijinfomgt.2016.02.006

[8] B. Cope and M. Kalantzis, "Big data comes to school: implications for learning, assessment, and research," AERA Open, 2 (2), 1–19, 2016.

[9] G. Gunduza and E. Fokoue, Turkiye Student Evaluation Data Set, University of California, School of Information and Computer Sciences, 2013.

[10] P. Cabena, P. Hadjinian, R. Stadler, J. Verhees, and A. Zanasi, "Discovering data mining: from concept to implementation," Prentice Hall, Englewood Cliffs, N.J., 1998.

[11] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, 521 (7553), 436−444, 2015.

[12] A. Liaw and M. Wiener, "Classification and regression by randomForest," R News, 2 (3), 18−22, 2002.

[13] J. H. Friedman, "Greedy function approximation: a gradient boosting machine," Annals of Statistics, 29 (5), 1189−1232, 2001.

[14] R. E. Schapire, "The boosting approach to machine learning: an overview," in Nonlinear Estimation and Classification, Springer, 149−171, 2003.

[15] R. Kohavi and F. Provost, "Confusion matrix," Machine Learning, 30 (2-3), 271−274, 1998.

[16] G. Ritschard, "Computing and using the deviance with classification trees," COMPSTAT 2006 − Proceedings in Computational Statistics, 55−66, August 2006.

[17] P. Wei, Z. Lu, and J. Song, "Variable importance analysis: a comprehensive review," Reliability Engineering & System Safety, 142, 399−432, 2015.

[18] U. Grömping, "Variable importance in regression models," Wiley Interdisciplinary Reviews: Computational Statistics, 7 (2), 137−152, 2015.

[19] A. A. Saa, "Educational data mining & students' performance prediction," International Journal of Advanced Computer Science and Applications, 7 (5), 212−220, 2016.

[20] F. Martin, C. Wang, and A.
Sadaf, "Student perception of helpfulness of facilitation strategies that enhance instructor presence, connectedness, engagement and learning in online courses," The Internet and Higher Education, 37, 52−65, 2018.

[21] C. Henderson and K. A. Harper, "Quiz corrections: improving learning by encouraging students to reflect on their mistakes," The Physics Teacher, 47 (9), 581−586, 2009.

International Journal of Applied Sciences and Smart Technologies, Volume 5, Issue 1, pages 39-54, p-ISSN 2655-8564, e-ISSN 2685-9432. This work is licensed under a Creative Commons Attribution 4.0 International License.

Performance of Low Power Electric Energy Clothes Dryers for Households

Doddy Purwadianto1*, Budi Sugiharto1

1 Department of Mechanical Engineering, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia

*Corresponding author: purwadodi@gmail.com

(Received 24-04-2023; Revised 27-04-2023; Accepted 01-05-2023)

Abstract

One of the problems faced by middle to lower economic class people in big cities is drying clothes, both in the dry season and in the rainy season. Artificial clothes-drying equipment is needed that can replace the role of solar energy. The purpose of this study was to determine the performance of an electric clothes dryer and the time needed to dry clothes on a household scale. The clothes dryer works using a heat pump based on the vapor-compression cycle. The total power of the heat pump is 200 watts. The working fluid of the heat pump is R-134a. The fluid used to dry the clothes is air, circulated in a closed airflow system. The research was carried out experimentally by varying the number of clothes in the drying chamber. The coefficient of performance (COP) of the heat pump is 7.94.
The drying times for 15, 25 and 40 clothes were 210 minutes, 330 minutes and 450 minutes, respectively.

Keywords: dryer, performance, clothes, heat pump, vapor compression cycle

1. Introduction

The process of drying clothes is the process of eliminating the water in clothes. Clothes are dried after being washed and wrung out. Wringing can be done by hand or with a washing machine; the clothes are then dried by evaporation. The evaporation of water from clothes can be achieved in various ways, such as airing, drying in the sun, passing hot air over the clothes, or passing hot and dry air over them. After the clothes are dry, they are ironed, which smooths them and also kills germs on the clothes. Once tidied up, the clothes are stored in the wardrobe, and taken out and put on when they are to be worn.

The main process in drying clothes is the evaporation of the water contained in the clothes. Evaporation is the change of water from the liquid phase to the vapor phase. During evaporation, water is transferred from the clothes to the air. Evaporation requires heat, and the heat of vaporization is taken from the air: the air temperature decreases and the water content in the air increases, i.e. the specific humidity of the air rises. This is known as the cooling and humidifying process. The drying process is influenced by several air conditions, such as air temperature, air humidity and air flow [1].

Drying in the sun is natural drying; drying with the help of drying equipment is called artificial drying; combined drying uses solar energy together with other drying equipment.
Energy sources for drying equipment can come from waste fuel or biomass energy (husk, straw, corn cobs, sawdust and coconut fiber), LPG, CNG, or electricity. Research with drying equipment has been carried out by several researchers. Saptariana et al. dried rengginang, a traditional food made from sticky rice, using an LPG drying oven [2]. Doddy Purwadianto and Petrus Kanisius Purwadi dried corn chips using an electric drying oven [3]. Petrus Kanisius Purwadi dried clothes using an electric drying oven [4]. Wibowo Kusbandono, Petrus Kanisius Purwadi and A. Prasetyadi dried wooden planks in an electric drying oven [5,6]. Adhi Prasnowo dried potato chips using an electric oven [7]. Tri Mulyanto and Supriyono dried chili using an electric oven and solar energy [8]. Dian Morfi Nasution et al. and Sari Farah Dina et al. dried cocoa beans using solar energy and electrical energy [9,10]. Efriwandy Simbolon et al. dried cloves in an oven burning coconut waste [11].

The disadvantages of drying clothes in the sun are that it takes a long time and depends on the weather. Drying with an LPG-fired oven can be done at any time (morning, afternoon and evening) and does not depend on the weather (it can be done in the dry season or in the rainy season); it can be done indoors and dries quickly. The disadvantages of drying with an oven are that it is impractical or complicated, less safe because of the potential for fires, wasteful of energy, and not environmentally friendly because it creates exhaust-gas pollution and an unclean environment. In big cities, many middle-class people find it difficult to dry clothes naturally because they do not have a large outdoor yard at their house.
The solution offered here is artificial clothes-drying equipment using a heat pump with low electric power. The heat pump works on the basis of the vapor-compression cycle. With a heat pump, the dry and hot air needed to dry clothes can be obtained without electric heating elements; electricity is used only to drive the compressor and fan. Air drying is carried out by the evaporator, while air heating is carried out by the condenser. The evaporator and condenser are the main components of the heat pump, apart from the compressor and capillary tube. The drying time is not a problem if the dryer is only for household use (not for business). With a heat pump, drying clothes in the sun can be abandoned. The machine is safe, because the heat pump poses only a minor potential hazard. Clothes drying can be done indoors at any time, regardless of the weather, safely and comfortably. Research on drying clothes with heat pumps has been carried out by Pradeep Bansal, Amar Mohabir and William Miller [12], and by Ward TeGrotenhuis, Andrew Butterfield, Dustin Caldwell, Alexander Crook and Austin Winkelman [13]. In addition to being usable at any time, a heat pump gives high performance and is practical and environmentally friendly. Other studies using heat pumps to dry clothes were carried out by Purwadi, P. K., using 800 watts of electric power, and by Cakra et al., using 1 pk of electric power [14]. Research conducted by Gordon et al. used a lower electric power of 0.5 pk [15]. For people of the middle economic class, a clothes dryer suitable for household use is one that uses a heat pump with low electric power. In this study, the power needed to drive the heat pump is 200 watts.

A. Heat Pump

The clothes dryer used in this study uses a heat pump based on a vapor-compression cycle. The working fluid in the heat pump is called refrigerant or freon.
Refrigerants must be safe and environmentally friendly so as not to damage the ozone layer. The vapor-compression cycle is commonly used in refrigeration machines [16]. The ideal vapor-compression cycle is composed of several main processes: compression, desuperheating, condensation, pressure reduction and evaporation. To improve machine performance, additional processes can be added to the standard vapor-compression cycle, namely superheating and subcooling. The heat pump has four main components: compressor, condenser, capillary tube and evaporator. The compressor performs the compression process; the condenser performs the desuperheating and condensation processes; the capillary tube reduces the pressure; and the evaporator performs the evaporation process. Fig. 1 presents the arrangement of the main components of a heat pump, and Fig. 2 presents the vapor-compression cycle on the P-h diagram. In the condenser, heat is released as the refrigerant undergoes desuperheating and condensation; in the evaporator, heat is absorbed from the environment as the refrigerant boils. The heat pump obeys an energy balance: the energy entering the heat pump (absorbed by the evaporator) plus the energy used to drive it (supplied to the compressor) equals the energy released by the heat pump (rejected by the condenser to the environment). When the vapor-compression cycle machine is running, refrigerant flows successively through the compressor, condenser, capillary tube and evaporator, and then back to the compressor. As long as electricity is supplied, the vapor-compression cycle in the heat pump continues.
The function of the compressor is to raise the refrigerant pressure at constant entropy (isentropically), from the low pressure P1 to the high pressure P2. The work done by the compressor per unit mass of refrigerant is denoted Win. Besides the pressure increase, the refrigerant temperature rises, so the gas changes from saturated gas to superheated gas. Leaving the compressor, the refrigerant enters the condenser. The condenser's job is to reject heat into the surrounding air. The refrigerant undergoes two processes there: desuperheating and condensation. While rejecting heat during desuperheating, the refrigerant temperature decreases and the gas changes from superheated to saturated gas at the high pressure P2. The process continues with condensation, which takes place in the condenser at constant pressure and constant temperature; the refrigerant changes phase from saturated gas to saturated liquid. If desired, the refrigerant can be brought to the subcooled liquid state, in which case a subcooling process is required. The amount of heat rejected by the condenser into the ambient air is denoted Qout. Leaving the condenser, the refrigerant flows to the capillary tube, passing first through a filter where impurities are removed. As the refrigerant flows through the capillary tube, its pressure drops until it reaches the low pressure P1, and its temperature also drops. This process takes place at constant refrigerant enthalpy. The pressure drop in the refrigerant is caused by friction between the inner surface of the capillary tube and the flowing refrigerant.
Because the diameter of the capillary tube is quite small, the pressure drop is quite large, and some of the refrigerant flashes to vapor. The refrigerant then enters the evaporator, where it absorbs heat from the surrounding air, changing from a liquid-gas mixture to saturated gas.

Figure 1. Arrangement of the main components of a heat pump.

Key to Fig. 1: 1 compressor, 2 evaporator, 3 capillary pipe, 4 condenser, 5 fan.

Besides presenting the arrangement of the main components of the heat pump, Fig. 1 also shows the direction of the air flow through it. The air entering the heat pump passes first through the evaporator and then through the condenser, after which it leaves the heat pump and is fed into the clothes-drying chamber. The air entering the evaporator is the air that has just dried the clothes in the drying chamber.

Figure 2. Vapor-compression cycle on a Mollier (P-h) diagram, without superheating and subcooling.

The ratio of the useful energy to the energy required to drive a vapor-compression cycle machine is called its performance. In this case, the useful energies are Qin and Qout. The performance, or coefficient of performance (COP), of the clothes dryer is calculated using equation (1):

COP = (Qin + Qout) / Win
In equation (1),

Qin = h1 − h4 (2)
Qout = h2 − h3 (3)
Win = h2 − h1 (4)

In equations (2), (3) and (4), h1 is the enthalpy of the refrigerant before it enters the compressor, h2 is the enthalpy of the refrigerant when it leaves the compressor, h3 is the enthalpy of the refrigerant when it enters the capillary tube, and h4 is the enthalpy of the refrigerant when it enters the evaporator.

B. Clothes Dryer

Fig. 3 presents a schematic of a clothes dryer that uses a heat pump. The refrigerant used in the heat pump is R-134a. The fluid used to dry the clothes is air that is sufficiently dry and hot. The air flow used in the drying process is a closed air cycle: no air flows into or out of the clothes drying chamber. To obtain sufficiently dry and hot air, the air is circulated through the heat pump, passing over the finned pipes of the evaporator and condenser components. The air flows because of the fan inside the heat pump. The sufficiently dry and hot air then flows over the entire surface of the clothes, which hang on hangers in the drying chamber. After the air has passed over the clothes, the air temperature drops but the air humidity increases. The air then flows back to the evaporator, and the air cycle repeats until the clothes reach the desired degree of dryness.

Figure 3. Schematic of a clothes dryer using a heat pump

Legend for Fig. 3: (1) heat pump, (2) drying chamber, (3) clothes, (4) hanger, (5) drying chamber wheels.

2. Research Methodology

The research was conducted experimentally. During the study, the heat pump was placed in the clothes drying chamber. The drying chamber is 1.1 m long, 1.1 m wide and 1.15 m high.
The drying chamber is made of wood with a thickness of 8 mm. In the drying chamber, a number of clothes hang on hangers; the arrangement of the clothes is shown in Fig. 3. The total electric power used by the heat pump is 200 watts. The evaporator used in the heat pump is of the finned-pipe type, with pipes made of copper and fins made of aluminum. The condenser is also of the finned-pipe type, likewise with copper pipes and aluminum fins. The capillary tube is made of copper and has a diameter of 0.026 inches. The research was conducted by varying the number of clothes in the drying chamber: (a) 15 clothes, (b) 25 clothes and (c) 40 clothes. During the drying process, the drying chamber is tightly closed: there is no inflow or outflow of air. The clothes being dried are cotton t-shirts, size XL.

3. Results and Discussion

Fig. 4 presents the vapor compression cycle of the heat pump used in the clothes dryer on the p-h diagram of R-134a. The cycle can be drawn using the absolute pressures p1 and p2, where p1 is the working pressure of the evaporator and p2 is the working pressure of the condenser. The p1 and p2 values are obtained from the measured gauge pressures plus the ambient air pressure. The vapor compression cycle is drawn assuming no superheating and no subcooling. From the cycle, the working temperature of the evaporator (Tevap), the working temperature of the condenser (Tcond) and the enthalpy values h1, h2, h3 and h4 can be determined; they are presented in Table 1. The enthalpy values h1, h3 and h4 in Table 1 were taken from the property table of refrigerant R-134a in order to obtain accurate data. Knowing the enthalpy values, the Qin, Qout, Win and COP of the heat pump can be calculated; the results are presented in Table 2.
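As a check on Table 2, the cycle quantities can be recomputed from the Table 1 enthalpies using equations (2)-(4) and equation (1). A minimal sketch in Python (the variable names are ours, for illustration only):

```python
# Enthalpies of R-134a from Table 1, in kJ/kg
h1 = 400.07  # entering the compressor (saturated gas at p1)
h2 = 432.55  # leaving the compressor (superheated gas at p2)
h3 = 287.39  # entering the capillary tube (saturated liquid at p2)
h4 = 287.39  # entering the evaporator (throttling is isenthalpic, so h4 = h3)

q_in = h1 - h4               # heat absorbed by the evaporator, eq. (2)
q_out = h2 - h3              # heat rejected by the condenser, eq. (3)
w_in = h2 - h1               # compressor work, eq. (4)
cop = (q_in + q_out) / w_in  # eq. (1)

print(round(q_in, 2), round(q_out, 2), round(w_in, 2), round(cop, 2))
```

This reproduces Qin = 112.68 kJ/kg, Qout = 145.16 kJ/kg, Win = 32.48 kJ/kg and COP = 7.94 from Table 2.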
The working temperature of the evaporator is the boiling temperature of the refrigerant in the evaporator at the low pressure p1, and the working temperature of the condenser is the condensing temperature of the refrigerant in the condenser at the high pressure p2.

Figure 4. Heat pump vapor compression cycle on the p-h diagram of R-134a

Table 1. Research data for the characteristics of the vapor-compression-cycle-based heat pump used in the clothes dryer, with 15, 25 and 40 clothes

Researched machine: heat pump
p1 = 3.496 bar | p2 = 16.813 bar | Tevap = 5 °C | Tcond = 60 °C
h1 = 400.07 kJ/kg | h2 = 432.55 kJ/kg | h3 = 287.39 kJ/kg | h4 = 287.39 kJ/kg

Table 2. Characteristics of the heat pump for drying 15, 25 and 40 clothes

Researched machine: heat pump
p1 = 3.496 bar | p2 = 16.813 bar
Qin = 112.68 kJ/kg | Qout = 145.16 kJ/kg | Win = 32.48 kJ/kg | COP = 7.94

Table 3. Air conditions in the drying process

Amount of clothes | Average air temperature before entering the evaporator, Tdb,a (°C) | Average air temperature out of the condenser, Tdb,d (°C)
15 clothes | 38.6 | 53.87
25 clothes | |
40 clothes | |

Table 4. Length of time for drying clothes

No | Amount of clothes | Initial total mass of wet clothes (kg) | Final total mass of dry clothes (kg) | Mass of water removed (kg) | Drying time (minutes)
1 | 15 | 3.94 | 2.55 | 3.94 − 2.55 = 1.39 | 210
2 | 25 | 6.57 | 3.96 | 6.57 − 3.96 = 2.61 | 330
3 | 40 | 10.51 | 6.51 | 10.51 − 6.51 = 4.00 | 450

Figure 5. The total mass of clothes over time during the drying process

When drying clothes, the air flows in a repeated air cycle inside the dryer.
The air cycle consists of four processes: air cooling (sensible cooling), air cooling accompanied by condensation of water vapor (cooling and dehumidifying), air heating (sensible heating), and cooling accompanied by an increase in specific humidity (evaporative cooling). The sensible cooling and the cooling-and-dehumidifying processes occur when the air passes through the evaporator, the heating process occurs when the air passes through the condenser, and the evaporative cooling process occurs while the air is drying the clothes.

The sensible cooling process takes place in the evaporator. The dry-bulb and wet-bulb air temperatures decrease, while the specific humidity remains fixed. Before the air undergoes the sensible cooling process, the air condition is at point a, with dry-bulb temperature Tdb,a and wet-bulb temperature Twb,a. At the end of the process, the air condition is at point b, with a dry-bulb temperature Tdb,b lower than Tdb,a and a wet-bulb temperature Twb,b lower than Twb,a. The results showed that at the end of the sensible cooling process the dry-bulb and wet-bulb temperatures have the same value, Tdb,b = Twb,b, and the relative humidity (RH) reaches 100%; thus the RH of the air increases until it reaches 100%. In the sensible cooling process, heat is released from the air, and this heat is absorbed by the evaporator and used to boil some of the refrigerant flowing in the evaporator pipes.
In the process of cooling the air accompanied by condensation of water vapor from the air (cooling and dehumidifying), the process takes place at a constant relative humidity of 100%, while the dry-bulb and wet-bulb air temperatures decrease. This process can occur because the working temperature of the evaporator (Tevap) is much lower than the condensation temperature of the water vapor in the air, which causes water vapor to condense out of the air. The water content of the air decreases: a reduction in specific humidity takes place. When the air leaves the evaporator, its dry-bulb temperature is expressed by Tdb,c and its wet-bulb temperature by Twb,c. The air coming out of the evaporator can be said to be dry: even though its RH is 100%, its water content is low. In this process the air releases heat, because the phase change from water vapor to liquid (condensation) always releases heat. The heat released by the air is likewise used to boil some of the refrigerant flowing in the evaporator pipes, that is, to change the refrigerant from a liquid-gas mixture into saturated gas. The total amount of heat absorbed by the evaporator from the air in the sensible cooling and cooling-and-dehumidifying processes, per unit mass of refrigerant, is expressed by Qin. The heat absorption in the evaporator takes place through the surfaces of the fins installed on the evaporator pipes and through the outer surfaces of the pipes. So that the heat absorption through the fins and the pipe surfaces proceeds quickly, aluminum is selected for the fin material and copper for the pipe material.
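The condition that makes the dehumidifying process possible, Tevap far below the dew point of the incoming air, can be illustrated numerically. The sketch below uses the Magnus approximation for saturation vapor pressure; the inlet relative humidity of 60% is our assumed example value, not a measurement from this study:

```python
import math

def dew_point_c(t_db_c, rh):
    """Dew point (degC) from dry-bulb temperature (degC) and relative
    humidity (0-1), via the Magnus approximation."""
    gamma = math.log(rh) + 17.62 * t_db_c / (243.12 + t_db_c)
    return 243.12 * gamma / (17.62 - gamma)

t_in = 38.6    # degC, average air temperature entering the evaporator (Table 3)
rh_in = 0.60   # assumed inlet relative humidity (illustrative value only)
t_evap = 5.0   # degC, evaporator working temperature (Table 1)

td = dew_point_c(t_in, rh_in)
print(round(td, 1))  # dew point of the incoming air, about 29 degC here
```

Since the evaporator surface at 5 °C is far below this dew point, water vapor condenses on the evaporator and the specific humidity of the air drops.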
Both copper and aluminum have relatively high thermal conductivity and affordable prices. A fan is installed near the evaporator to increase the air flow speed. Increasing the air speed increases the convection heat transfer coefficient, and the higher this coefficient, the greater the rate of heat transfer from the air to the refrigerant flowing in the evaporator.

Coming out of the evaporator, the air is passed through the condenser, where the air heating process occurs. This process runs at fixed specific humidity. The dry-bulb and wet-bulb temperatures of the air leaving the evaporator increase, from Tdb,c and Twb,c to Tdb,d and Twb,d. The air temperature rises because the working temperature of the condenser is higher than the air temperature, so heat flows from the refrigerant in the condenser into the air. The refrigerant can release heat because it undergoes a temperature decrease from superheated gas to saturated gas (the desuperheating process) and then a phase change from saturated gas to saturated liquid (the condensation process). The condenser releases this heat into the air through the fins on its pipes and through the outer surfaces of the pipes.

In the evaporative cooling process, the dry-bulb air temperature decreases from Tdb,d, while the wet-bulb air temperature remains at Twb,d. In an ideal process, the process runs at a fixed enthalpy value. The evaporative cooling process occurs when the air coming out of the condenser passes over the clothes hung on hangers in the drying chamber.
When the air passes over the clothes, the water in the wet clothes evaporates and moves from the clothes into the air, so the clothes dry. The water changes from the liquid phase into the vapor phase; this phase change requires heat, which is taken from the air passing over the clothes, and hence the dry-bulb air temperature decreases. The amount of sensible heat given up by the air equals the amount of latent heat used to evaporate the water from the clothes. In this process, water content is added to the air, so the specific humidity of the air increases.

From Table 4, it appears that the drying time depends on the number of clothes being dried, or equivalently on the mass of clothes to be dried. The more clothes, or the greater the mass of clothes, the longer the drying time, because more water has to be evaporated from the clothes. The degree of dryness obtained is evenly distributed over all the clothes and all clothing positions; everything dries as evenly as in solar drying. From Table 3 it can be seen that the average air temperature before passing through the evaporator is 38.6 °C and the average air temperature leaving the condenser is 53.87 °C.

Some of the advantages of a clothes dryer based on a vapor-compression-cycle heat pump are that it is practical, safe and comfortable, environmentally friendly, and usable anytime and anywhere. The potential for fire is small, as no flame, fuel or combustion is involved. The machine does not make noise, does not create hot air conditions outside the dryer, and keeps the environment clean.
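Table 4 also allows the average moisture-extraction rate of each run to be computed; interestingly, the rate rises with the load even though the total time grows. A small sketch (our own post-processing of Table 4, not a calculation from the paper):

```python
# (number of clothes, water removed in kg, drying time in minutes) from Table 4
runs = [(15, 1.39, 210), (25, 2.61, 330), (40, 4.00, 450)]

# Average evaporation rate in kg of water per minute, per load size
rates = {n: round(m_water / t, 4) for n, m_water, t in runs}
print(rates)
```

The computed rates are about 0.0066, 0.0079 and 0.0089 kg/min for 15, 25 and 40 clothes respectively, consistent with a larger wet surface area transferring moisture to the air faster.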
Because it uses electrical energy, the clothes dryer is easy to operate and does not burden the user. The dryer does not pollute the environment, because there is no combustion process producing exhaust gases. The dryer can work regardless of the weather: in the dry season or the rainy season; in the morning, afternoon, evening or night; indoors or outdoors.

4. Conclusion

The conclusions of this study are: (a) the electrical clothes dryer using a vapor-compression-cycle-based heat pump has a performance (COP) of 7.94; (b) the clothes dryer using the heat pump needs 210 minutes, 330 minutes and 450 minutes, respectively, to dry 15 clothes, 25 clothes and 40 clothes.

Acknowledgements

The author would like to thank the head of LPPM Sanata Dharma University for the grant support provided, so that this research could be carried out and completed properly and smoothly. We also express our deepest gratitude to the head of the Mechanical Engineering study program, Deputy Dean I and the Dean of FST Sanata Dharma University for the permission and for all the support given.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 1, pages 11-26, p-ISSN 2655-8564, e-ISSN 2685-9432

Artificial Generation of Realistic Voices

Dhruva Mahajan1,*, Ashish Gapat1, Lalita Moharkar1, Prathamesh Sawant1, Kapil Dongardive1
1Department of Electronics and Telecommunication Engineering, Xavier Institute of Engineering, Mahim, Mumbai, Maharashtra, India
*Corresponding author: dhruvam17@gmail.com
(Received 17-07-2020; revised 27-08-2020; accepted 15-09-2020)

Abstract

In this paper, we propose an end-to-end text-to-speech system deployment wherein a user feeds in input text data, which is synthesized, variated and altered into an artificial voice at the output end.
The aim is to create a text-to-speech model, that is, a model capable of generating speech with the help of trained datasets. It follows a process that organizes the entire function to present the output sequence in three parts: speaker encoder, synthesizer, and vocoder. Subsequently, using the datasets, the model generates voice after prior training and maintains the naturalness of speech throughout; for naturalness of speech we implement a zero-shot adaptation technique. The primary capability of the model is voice regeneration, which has a variety of applications in the advancement of the domain of speech synthesis. With the help of the speaker encoder, our model synthesizes user-generated voice if the user wants the output trained on his or her own voice, which is fed in through the microphone present in the GUI. The regeneration capabilities lie within the domain of voice regeneration, which generates similar voice waveforms for any text.

Keywords: speech synthesis, speaker encoder, synthesizer, text-to-speech, vocoder

1 Introduction

In the near future, everything around us will be voice operated. With the growing adoption of Alexa and Google Home, advancements are being made to create an environment of artificial intelligence whose operating medium is voice. With the advent of signal processing, voice signals have undergone extensive upgrades relative to global standards and keep pushing for better platforms in all their forms of use. From the voice screeching out of an airhorn to the voice search engines in smartphones, voice applications have always been vital in all their feats. Artificial generation itself is a form of voice cloning, implemented with the help of neural nets that generate the mel-spectrograms.
This process operates on the input, where the characters present are synthesized into signal waveforms, which are digital spectrograms. These spectrograms are then coupled and linked through a vocoder, which generates voice corresponding to the characters given as input.

The goal of this paper is to build a TTS system that can generate natural speech for a wide variety of speakers, including speakers absent throughout the training process. Our model can run in real time by implementing the microphone function. This is possible by achieving a powerful form of voice cloning. The output must be aligned in such a manner that it runs on correct lines to provide a clone of the dataset trained within the system. The output speech must align with the exact speaker voice picked from the dataset. This voice gets matched with the help of an RNN, which cycles the implementation process wherein the dataset voice gets linked with the input text. This implementation works for all speakers, whether present or absent. When a speaker uses the microphone function, the speaker encoder does not operate. Uncertainties do arise when we train the model with the speaker's reference speech (trained dataset):

• The output utterance is slightly composed.
• Dataset voices can be identical, depending on the dataset picked (VCTK and LibriSpeech).
• The voice output obtained through the microphone tends to be rough. While recording in real time, it is very important to check the surrounding environment: the microphone integrated with the system is highly sensitive and tends to pick up the smallest utterance.
• Recording a large amount of high-quality data for many speakers is rather impractical.
The approach we deploy decouples speaker modeling from speech synthesis by independently training a speaker embedding network that traces and sequentially captures the space of speaker characteristics, and separately training a high-quality TTS model [1], [2]. Taking findings from a 2017 research paper on Google's Tacotron, we worked out the synthesis and voice generation process [3]. We took implementation steps from a deep learning architecture that we researched and coupled it with WaveNet [3], [4]. WaveNet is a neural network that acts as a vocoder, converting mel-spectrograms into corresponding voice signals. To check the process and operation we used two public datasets, LibriSpeech and VCTK. After this implementation confirmed our synthesis process, we started building the model. To keep up with time and technology, we implement a neural network architecture from our research pertaining to voice synthesis. This research model is a deep learning text-to-speech model; text-to-speech means that a string of text is converted to speech output. There are three major operations.

• Speaker encoder: the embedded text fed in is sent to a convolution bank highway network. This convolution bank is a deep learning tool which breaks our text into separate characters. Breaking the string into characters allows the network to work on each character's modulation, making the output more in tune with natural voice rather than sounding modulated. For example, "good morning" in the convolution bank is presented as g-o-o-d m-o-r-n-i-n-g.
• Synthesizer: we use trained datasets, which are fed in and loaded before operating the model. During synthesis, our character embedding, disintegrated character by character, is sent to the attention module, where each character is coupled with the trained dataset voice signal.
• Vocoder: this is the final part of our model, where the signals bound together as a spectrogram are converted into voice. This operation uses the WaveNet neural network [5]. (WaveNet was developed at DeepMind, an AI research wing of Google.) WaveNet as a vocoder acts as a binder, where all the end results (the mel-spectrograms obtained) are combined. These results are the voice signal modulations produced for each character given as input. Explained mathematically, in theory our input text is converted into an algebraic expression (whose terms are the input characters); this expression is then simplified by taking a constant k (the dataset selected). The simplified expression is then collected and generated into a series function (the desired output). With spectrograms for these character strings, the waveform passes through WaveNet, which performs natural language processing, giving voice as the end result.

2 Methodology

For a text-to-speech speaker model to pick up character embeddings, the model needs to be well disintegrated into the three subsets clarified above. These subsets work in a sequential manner, providing speech in a uniform and desired way. Examining each module in turn will clarify its role, with technical overviews and details. See Figure 1 for the block diagram.

Figure 1. Block diagram

The speaker encoder is illustrated in Figure 2. The speaker encoder plays an important role by estimating the embedding from the various text characters fed in as input. While implementing a text-to-speech model, it is up to the developer by what means it should be operated.
To cut out the unnecessary word-by-word translation of a conventional text-to-speech system, where the text is sent in as a string and the output obtained has passive speech, we use a deep learning architecture in which the input text is fed in as the prenet (pre-fed local information). This prenet is then sent to a 1D-CBHG, a one-dimensional convolution bank highway network. This block is where the string is broken into a character-by-character formation [5]. Example: good becomes g-o-o-d.

Figure 2. Speaker encoder

The synthesizer is illustrated in Figure 3. The function of everyday text-to-speech is evident in our voice search engines, smart electronics and the various voice-modulating devices that make use of a synthesizer. A text-to-speech control system employs an easy-to-use transport medium through which a user can control most text-to-speech variating functions without any prior training. The synthesizer basically works by translating a plurality of discontinuous user-selected parts of text, in a form independent of the target application, into an audio output that resembles the sound of human speech. In general, the synthesizer is the core of the text-to-speech engine; it mainly focuses on tracing and adapting the text input in sequence and delivering an audio sample of it [5].

Figure 3. Synthesizer block

In the block diagram represented above, the speaker encoder is linked to the concat block. In this concat block, the characters broken down in the CBHG are forwarded as a string, with intervals between each character, to the attention module. Attention is a deep learning tool where each character is concentrated, the constant is removed, and the character is bound to the trained dataset.
This trained dataset is forwarded from the decoder, which acts as a gate for the audio signal and couples it with the data in the concat block. The constant removed is called the 'k-constant', which is used for signal processing.

Vocoder: the output should take a form such that all the speech is synthesized so that the corresponding input is heard in a tone tuned to match the tempo of the unseen speaker's voice. For this purpose a vocoder is used, which captures the characteristic elements of the audio signal and then uses this characteristic signal to affect other audio signals. It is also dubbed a "talking synthesizer" for its ability to fine-tune the synthesized signal in accordance with vocal frequency. See Figure 4 for the log-mel waveforms.

Figure 4. Log mel-waveforms

The vocoder provides a bank of multiple bandpass filters which dissociate the input signal into narrow spectral slices. Suppose we excite channel k of the vocoder with the input signal

a(nT) cos(w_k nT), for n = 0, 1, 2, 3, 4, 5, ...

where w_k is the center frequency of the channel in radians per second, T is the sampling interval in seconds, and the bandwidth of a(nT) is smaller than the channel bandwidth. We regard this input signal as an amplitude-modulated sinusoid: the component cos(w_k nT) can be called the carrier wave, while a(nT) > 0 is the amplitude envelope. If the phase of each channel filter is linear in frequency within the passband (or at least across the width of the spectrum), and if each channel filter has a flat amplitude response in its passband, then by the analysis of the previous section the filter output will be

y_k(n) ≈ a[nT − D(w_k)] cos(w_k [nT − P(w_k)]).
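The amplitude-modulated-sinusoid view of a vocoder channel can be demonstrated numerically: when the carrier frequency is well above the envelope bandwidth, the magnitude of the analytic signal recovers a(nT) almost exactly. A sketch with NumPy (the sampling rate and frequencies are our illustrative choices, not values from the paper):

```python
import numpy as np

fs = 8000                  # sampling rate in Hz, so T = 1/fs
n = np.arange(fs)          # one second of samples
t = n / fs
a = 0.5 + 0.4 * np.sin(2 * np.pi * 2 * t)   # slow, positive envelope a(nT)
x = a * np.cos(2 * np.pi * 500 * t)         # channel signal a(nT) cos(w_k nT)

# Analytic signal via an FFT-based Hilbert transform: keep DC and Nyquist,
# double the positive frequencies, zero the negative ones.
X = np.fft.fft(x)
h = np.zeros(len(x))
h[0] = h[len(x) // 2] = 1.0
h[1:len(x) // 2] = 2.0
envelope = np.abs(np.fft.ifft(X * h))

err = np.max(np.abs(envelope - a))
print(err)  # tiny: the envelope is recovered from the modulated carrier
```

Because the envelope spectrum (0 and ±2 Hz, shifted around the 500 Hz carrier) stays entirely in positive frequencies, the recovery here is exact up to numerical error.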
Creating a GUI (illustrated in Figure 5): among a software application's layers, the main overlayer and visible interaction point is the graphical user interface (GUI), which gives the user dynamic control over its functioning [6]. For our model, we intended to build an interface that allows the user to interact with the model. Creating a GUI also lays out the functions at the user's disposal within a fixed framework. The GUI requires the interaction to be in a flow that neither hampers the user nor causes any imbalance within the process. Considering that our model is heavily based on synthesis, it is completely oriented toward user interaction (input/character embedding); the GUI should therefore allow an easy flow of tasks within the same framework. To create the GUI for our model, we use the Tkinter library of Python. Within the Tkinter package there are many functions that make things more organized and presentable: Tkinter allows us to make various frames and buttons and organizes the functions systematically. The major properties to include in the GUI are illustrated in Figure 5.

Figure 5. Main GUI components

3 Implementation

Real-time voice cloning (using SV2TTS): the entire approach to real-time voice cloning is adapted on the basis of transfer learning from speaker verification, also dubbed prosody transfer (voice styling implementation). It is speaker verification to multispeaker text-to-speech synthesis. It essentially defines a framework for voice cloning that requires barely 4-6 seconds of reference speech. It depends on three earlier works from Google: the GE2E loss (Wan et al., 2017), Tacotron (Wang et al., 2017) and WaveNet (van den Oord et al., 2016). The proposed model is a three-stage pipeline, in the order listed above.
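The three-stage pipeline can be pictured as a chain of stages whose outputs feed the next stage. The stubs below are purely illustrative of the dataflow and shapes (dummy arrays, invented frame counts and hop length), not the actual trained networks:

```python
import numpy as np

def encode_text(text):
    """Break the input string into characters, CBHG-style (spaces dropped)."""
    return [c for c in text if c != " "]

def synthesize(chars, frames_per_char=5, n_mels=80):
    """Map each character to a few dummy mel-spectrogram frames (illustrative)."""
    return np.zeros((frames_per_char * len(chars), n_mels))

def vocode(mel, hop_length=200):
    """Expand each spectrogram frame into hop_length dummy waveform samples."""
    return np.zeros(mel.shape[0] * hop_length)

chars = encode_text("good morning")   # g-o-o-d m-o-r-n-i-n-g
mel = synthesize(chars)               # (frames, mel channels)
wav = vocode(mel)                     # waveform samples
print(len(chars), mel.shape, len(wav))
```

Here "good morning" yields 11 characters, a (55, 80) dummy spectrogram and 11000 dummy samples; in the real pipeline each stage is a trained network, but the interfaces between stages have this shape.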
Google Cloud services, the Google search engine and Google Assistant make use of these same models. The model architectures of the speaker encoder, synthesizer and vocoder are shown in Figure 6.

Figure 6. Three-stage pipeline for text-to-speech

Model architecture of the speaker encoder: it is a three-layer LSTM with 768 hidden nodes, followed by a projection layer of 256 units. None of the cited references states exactly what the projection layer is; we therefore take it to be a 256-output fully connected layer (per LSTM layer) that is applied repeatedly to every output of the LSTM. The inputs to this model are 40-channel log-mel spectrograms with a 25 ms window width and a 10 ms step. The desired output is the L2-normalized hidden state of the last layer, which acts as a vector of 256 elements.

Model architecture of the synthesizer: the target mel spectrograms for the synthesizer carry more features than those used for the speaker encoder; they are computed from a 50 ms window with a 12.5 ms step and have 80 channels. We use a Python implementation of the logMMSE algorithm to filter the speech audio by removing noise in the early frames. We train the synthesizer for 150k steps with a batch size of 144, and the decoder is set to 2 outputs per step. During implementation the architecture provides speech synthesis aimed at cloning the unseen speaker identically. During this process, losses are accounted between the predicted and ground-truth mel spectrograms; this is the L2 loss function. While training, the model is set to ground-truth aligned (GTA).
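The speaker-encoder interface described above can be sketched in terms of tensor shapes. In the snippet below, the mel filterbank and network weights are random stand-ins (this is not a trained model); only the framing arithmetic (25 ms window, 10 ms step, 40 channels), the 256-dimensional output and the final L2 normalization follow the text, and the 16 kHz sample rate is an assumption.

```python
import numpy as np

# Shape-level sketch of the speaker encoder: 40-channel log-mel frames
# from a 25 ms / 10 ms analysis, mapped to a unit-norm 256-d embedding.

SR = 16000                                    # assumed sample rate
WIN, HOP, N_MELS, EMB = int(0.025 * SR), int(0.010 * SR), 40, 256

def n_frames(n_samples):
    """Number of 25 ms windows, stepped every 10 ms, fitting in the signal."""
    return 1 + (n_samples - WIN) // HOP

def embed(frames, rng):
    """Stand-in for the 3-layer LSTM + projection: any (T, 40) input
    becomes an L2-normalized 256-vector, as the text specifies."""
    proj = rng.standard_normal((frames.shape[1], EMB)) * 0.01  # fake weights
    h = np.tanh(frames @ proj).mean(axis=0)   # crude pooling over time
    return h / np.linalg.norm(h)              # L2 normalization

rng = np.random.default_rng(0)
t = n_frames(3 * SR)                          # frames in a 3-second utterance
mels = rng.standard_normal((t, N_MELS))       # fake log-mel frames
e = embed(mels, rng)
print(e.shape, round(float(np.linalg.norm(e)), 6))
```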
The reason for this is that without GTA the synthesizer would produce different variations of the same utterance (text or embedding). The implementation of the neural nets is shown in Figure 7.

Figure 7. Implementation of neural nets

Model architecture of the vocoder: the vocoder implemented in the model is WaveNet [1, 4]. WaveNet produces naturalness in TTS, which is the primary reason it is used in Tacotron and SV2TTS. However, the quality of this neural net comes at the cost of speed: it is very slow, the slowest deep-learning architecture at inference time. Even though this stresses the implementation, improvements can be made to it. Google's own vocoder works at an output rate of 8,000 samples per second, which is by no means bad for a neural net. The model implemented is an open-source PyTorch implementation based on an RNN model deployed by GitHub user fatchord.

Zero-shot speaker adaption: the speech characteristics to be synthesized are picked up from the audio signal. Zero-shot adaption is the ease with which the model links to training data from an unseen speaker who was not present during training. The model requires only 4–6 seconds of reference to generate new speech by synthesizing the speaker characteristics. Inference can synthesize the speaker information without knowledge of the input fed through. In our model, inference operates on arbitrary untranscribed speech audio that does not need text matched to the synthesizer, making the implementation hassle-free and comparatively quick.

Dataset used: two public datasets are used for synthesis and for the speech training of the vocoder network.
VCTK comprises 43 hours of clean speech from 109 speakers, most of them with British accents. We downsampled the audio to 25 kHz and trimmed leading and trailing silence (reducing the median duration from 3.3 seconds to 1.8 seconds). It is split into three subsets: train; validation, which has the same speakers as the train set; and test, which has 11 speakers held out from the train and validation sets.

LibriSpeech comprises the union of the two "clean" training sets, consisting of 436 hours of speech from 1,172 speakers, sampled at 16 kHz. The speech is mostly US English; however, since it is sourced from audiobooks, the vocals and speaking style can differ significantly between utterances from the same speaker. We reassembled the data into shorter utterances by force-aligning the audio to the transcript using an automatic speech recognition (ASR) model and breaking segments on silence, reducing the median duration from 14 to 5 seconds.

Naturalness of speech: the clearer a person's voice, the crisper the audibility that follows. This can differ in speech synthesis, where natural speech is encoded and decoded with the help of speech synthesizers and, in our case, deep-learning networks too. To arrange this, there is prenet data, also termed local information, which is coupled with the character embeddings from the input. Interference plays a major role here, and noise is added while processing through the highway networks. With subsequent processing and the ability to trace character by character (using the Fourier transform) we obtain the log-mel spectrogram through the vocoder. After attaining the desired output, the major question is how much of it is natural speech.
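The silence-based segmentation used above to shorten utterances can be sketched as follows. This is our own minimal illustration, not the paper's preprocessing code: the frame size and energy threshold are assumed values, a toy tone-and-silence signal replaces real audio, and the forced alignment to a transcript is omitted.

```python
import numpy as np

# Break a waveform into voiced segments by thresholding per-frame energy;
# leading/trailing silence is dropped and interior silences split segments.

def split_on_silence(x, frame=160, thresh=0.01):
    """Return (start, end) sample ranges of voiced segments."""
    n = len(x) // frame
    energy = np.array([np.mean(x[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    voiced = energy > thresh
    segments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i                               # segment opens
        elif not v and start is not None:
            segments.append((start * frame, i * frame))  # segment closes
            start = None
    if start is not None:
        segments.append((start * frame, n * frame))
    return segments

# Toy signal at 16 kHz: silence, a tone burst, silence, a shorter burst, silence.
sr = 16000
t = np.arange(sr) / sr
x = np.zeros(3 * sr)
x[sr//2 : sr] = np.sin(2*np.pi*440*t[: sr//2])           # burst 1: 0.5-1.0 s
x[2*sr : 2*sr + sr//4] = np.sin(2*np.pi*440*t[: sr//4])  # burst 2: 2.0-2.25 s
segs = split_on_silence(x)
print(segs)
```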
As we implement two public datasets, we compared VCTK and LibriSpeech to check the speech synthesis through the synthesizers and vocoder. We compared using 11 unseen and seen speakers for VCTK and 10 unseen and seen speakers for LibriSpeech; each comparison was conducted independently [2]. The comparison of time taken for synthesis is listed in Table 1.

Table 1. Comparison of time taken for synthesis

  System           Speaker information   VCTK          LibriSpeech
  Ground truth     Same speaker          4.67 ± 0.04   4.33 ± 0.08
  Ground truth     Same gender           2.25 ± 0.07   1.83 ± 0.07
  Ground truth     Different gender      1.15 ± 0.04   1.04 ± 0.03
  Embedding table  Seen                  4.17 ± 0.06   3.70 ± 0.08
  Proposed model   Seen                  4.22 ± 0.06   3.28 ± 0.08
  Proposed model   Unseen                3.28 ± 0.07   3.03 ± 0.09

Speaker similarity and verification: to check which speech has cleaner detail, we examine the results of the above comparison of the two datasets. This is done to check whether the desired output is identical to the given input. From the comparison table, the values delivered by VCTK tend to be a notch above those of the LibriSpeech dataset: the speech from the VCTK dataset is cleaner, which can also be seen from the higher ground-truth baselines for VCTK. That makes VCTK better; however, when using LibriSpeech on the VCTK model, the output was visibly better than that of the VCTK model, which means that the achievable similarity depends on the dataset and the type of ground truth. For speaker verification on LibriSpeech, the synthesized speech is at most as similar as the ground-truth voices. The LibriSpeech synthesizer obtains similar EERs of 5–6% using reference speakers from both datasets, whereas the one trained on VCTK performs much worse, especially on out-of-domain LibriSpeech speakers.

4 Results and Discussion

Figure 8.
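The equal error rate (EER) quoted above can be computed as follows. This is a generic sketch of the metric, not the paper's evaluation code: embeddings are compared by a similarity score, and the EER is the operating point where the false-accept and false-reject rates meet. The score distributions below are synthetic, purely to show the computation.

```python
import numpy as np

# Equal error rate from genuine (same-speaker) and impostor
# (different-speaker) similarity scores; higher score = more similar.

def eer(genuine, impostor):
    """Sweep thresholds and return the rate where FAR and FRR cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, 0.0)                       # (|FAR - FRR|, candidate EER)
    for t in thresholds:
        frr = np.mean(genuine < t)          # genuine pairs rejected
        far = np.mean(impostor >= t)        # impostor pairs accepted
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 1000)    # synthetic same-speaker scores
impostor = rng.normal(0.3, 0.1, 1000)   # synthetic different-speaker scores
print(f"EER ~ {eer(genuine, impostor):.3f}")
```

A 5–6% EER, as reported for the LibriSpeech synthesizer, means roughly one verification error in twenty trials at the balanced operating point.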
Proposed method

Having all the details and implementation incorporated, we arrived at our proposed design (see Figure 8). We had to consider every part as sequential support that provides proportional synthesis and delivers the desired output. Two procedures were followed to obtain results.

Method 1: in the first method we used the training datasets for speech synthesis, LibriSpeech and VCTK respectively. The output obtained was clear and audible through headphones; in an open environment the voice felt a little light and required a speaker with an equalizer.

Method 2: this method is an advancement of our paper which could be extended further with correct implementations. We tried feeding our own voice in through the microphone and cloning it. The speech did get synthesized, but the output obtained was not as clear as in the previous case and was slightly gibberish in nature.

Audio quality analysis: the audio quality can be better heard by using speakers with better bass and an equalizer in an open environment. There is a 2–3 second lag when loading the audio sample from the dataset, which can be improved by trimming the audio samples. Throughout all testing periods the model was reliable and highly efficient in terms of user interface. Further extension is possible by implementing (refined) user-generated voice and by adding miscellaneous datasets covering various accents and dialects.

In accordance with the above model, we had to implement a GUI (see Figure 9) that supports the functioning of the model and is user-friendly to operate. We implemented the Python Tkinter package in accordance with the system model and created a GUI that provides most of the proposed system through an easy-to-follow framework design.

Figure 9.
Output GUI

5 Conclusion

The concept of voice cloning has always seen advancements, tending always to grow for the better. Incorporating all the factors, we try to instill in the model all the datasets and pretrained data through which the model infers and tries to pull out the desired output. To attain the desired output, we need to verify that the synthesized speech is identical to the trained unseen speech; if the speech synthesis is not identical, we cannot call the output the desired output. Such models have implications and applications mostly in the armed forces, giving stealth operations a new edge: voice cloning can make communication across borders, across seas and even over the telephone a well-rounded mystery, which is why it benefits the armed forces greatly. Applications also lie with the media; people use voice cloning for media applications or entertainment media such as Dubsmash. The model also helps with the regeneration of voice, whereby a person who is disabled or has lost the ability to speak under certain circumstances can still communicate. Voice cloning works for the future: with timely upgrades and correct use, the regeneration factor could take over music and pop culture in a wave. With a system having such a multi-area application domain, it is very important to place regulations and boundaries, bounded by legal clauses, on its implementation. A major part of the model lies in the use of the text-to-speech system, which is the backbone of the model. Many systems integrated with artificial intelligence pick the text-to-speech system as a prime domain because of its broad area of implementation, from voice assistance and voice detection to voice cloning.
Our model works with voice cloning and moves in the direction of voice regeneration, which will be a major breakthrough in the near future. The proposed model does not attain human-like naturalness, despite the use of a WaveNet vocoder (along with its very high inference cost), in contrast to the single-speaker results. This is a consequence of the additional difficulty of generating speech for a variety of speakers given significantly less data per speaker, as well as the use of datasets with lower data quality.

Acknowledgements

The corresponding author acknowledges all the co-authors and group members for cooperating and working with an optimistic mindset. Every part of the paper required dedicated and devoted attention from the associated department, and personal guidance from the project guide, Prof. Lalita Moharkar, who stood by us throughout the buildup of the research model. Walking the entire path from scratch required detailed references, which acted as a walking stick: the journal and technical papers present in international journals and publications.

References

[1] L. Wan, Q. Wang, A. Papir and I. L. Moreno, "Generalized end-to-end loss for speaker verification." Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference, 2018.

[2] Y. Jia, Y. Zhang, R. J. Weiss, Q. Wang, J. Shen, F. Ren, Z. Chen, P. Nguyen, R. Pang, I. L. Moreno and Y. Wu, "Transfer learning from speaker verification to multispeaker text-to-speech synthesis." Advances in Neural Information Processing Systems, 31, 4485–4495, 2018.

[3] Artificial intelligence at Google – our principles. https://ai.google/principles/, 2018.

[4] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior and K. Kavukcuoglu,
"WaveNet: a generative model for raw audio." arXiv preprint 1609.03499, 2016.

[5] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. J. Skerry-Ryan, R. A. Saurous, Y. Agiomyrgiannakis and Y. Wu, "Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions." Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.

[6] M. Grechanik, Q. Xie and C. Fu, "Creating GUI testing tools using accessibility technologies." Proceedings of the IEEE International Conference on Software Testing, Verification, and Validation Workshops, 243–250, 2009.

International Journal of Applied Sciences and Smart Technologies, Volume 1, Issue 2, pages 85–100, p-ISSN 2655-8564, e-ISSN 2685-9432

Measuring Privacy Leakage in Term of Shannon Entropy

Ricky Aditya 1,*, Boris Skoric 2

1 Department of Mathematics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
2 Security Group, Eindhoven University of Technology, Eindhoven, the Netherlands
* Corresponding author: y_ricky_aditya@yahoo.com

(Received 17-05-2019; revised 28-06-2019; accepted 31-07-2019)

Abstract

Differential privacy is a privacy scheme in which a database is modified such that each user's personal data are protected without significantly affecting the characteristics of the whole data. An example of such a mechanism is the randomized aggregatable privacy-preserving ordinal response (RAPPOR). It was later found that the interpretations of the privacy, accuracy and utility parameters in differential privacy are not totally clear. Therefore, in this article an alternative definition of the privacy aspect is proposed, in which it is measured in terms of Shannon entropy.
Here Shannon entropy can be interpreted as the number of binary questions an aggregator needs to ask in order to learn information from a modified database. The privacy leakage of a differentially private mechanism is then defined as the mutual information between the original distribution of an attribute in a database and its modified version. Furthermore, some simulations using the MATLAB software for special cases of RAPPOR are presented to show that this alternative definition does make sense.

Keywords: differential privacy, RAPPOR, Shannon entropy, mutual information, privacy leakage

1 Introduction

In a digitalized era when many things can be done online, privacy becomes a more serious issue, especially when our personal data have to be submitted online for some reason. Even with published privacy policies (which most users never read properly), there is some room for privacy violations. Here we will not talk about hackers or other outsiders, because the ones who violate privacy might come from the authorized parties. The most annoying case is when some parties use their authority to leak someone's private data, yet no law or rule allows this to be declared a privacy violation, and therefore they cannot be punished. Consider, for example, our medical records stored in a hospital's database. Our data, together with other persons' data, might be used by other parties who want to learn something from the database, say a medicine company or a medical research center. We never know whether they really access the database only to gain the necessary information, or whether they may search for our personal data. A basic and simple way to prevent this is by hiding the names of the data owners, i.e., making the data anonymous. Unfortunately, this may not be enough to protect our private data.
They can still access any other data, such as height, weight, age, gender, etc. Consider persons with a very rare attribute, for example: very tall, very short, very fat, very thin, and many more. By looking at one or two specific attributes, they can uniquely identify these persons and, as a consequence, leak their private information. They of course violate those persons' privacy, but we cannot say that they break any law or any rule in the privacy policies. Suppose that someone is famous as the tallest guy in his city. Roughly speaking, as long as nobody asks the hospital who the tallest guy in this database is, and the hospital does not volunteer it either, no laws or rules are broken.

Based on this kind of issue, many data-security researchers try to create new privacy protocols to protect private information. One of them is called differential privacy. The idea is to modify the original database such that each user's personal data are protected but the characteristics of the whole database do not change significantly. Other parties are thus still able to learn information about the whole database, but they are unable to learn any personal information. As a very simple example, take five persons: A, B, C, D and E. In fact A and B are smokers, while the others are not. After modification, the smokers become C and E. The fact that A and B are smokers is now hidden, but the fact that two of these five persons are smokers is preserved. Note that other parties know that the database has been modified, so they cannot judge C and E to be smokers. Therefore, if they just want to know the proportion of smokers in the database, they will get it right, but they will not know who the real smokers are. In practical cases, of course, we work on much larger databases with various attributes.
We do not have to preserve the exact proportion of each attribute, but we need to keep it within a small margin of error. The concept of differential privacy is discussed in the next section, together with some specific mechanisms that can be used.

2 Differential Privacy and RAPPOR

The idea of differential privacy first appeared in Dwork et al. [1] in 2006, where the idea of protecting privacy by adding noise to the data was introduced. At that time it had not yet been named differential privacy; the name came later, after subsequent research. After a few years of thorough work in this area, a more comprehensive treatment of differential privacy was published in Dwork and Roth [2]. The concepts and definitions in this section are based on [1] and [2].

2.1 Differential privacy

Now we turn to the definition of differential privacy. Let a database be represented as a table in which the rows represent the users and the columns represent the attributes. Sometimes the parties who have authorized access to the database only need to take samples of users rather than all of them. We do not always know what they want to look for, but we can assume that they have full authority to do so. We say that two sub-databases are neighboring if one is obtained by adding or deleting one row of the other. If the database is not modified, it is possible to learn about one specific user by studying two databases: one containing him/her, and the one obtained by eliminating him/her from it. Therefore, in order to protect that user's privacy, the modification mechanism needs to eliminate this possibility. This leads to the definition of a differentially private mechanism.

Definition 2.1. Let A be a mechanism that modifies a database D, with A(D) as output.
The mechanism A is said to be (ε, δ)-differentially private, where both ε and δ are nonnegative numbers, if for any neighboring sub-databases x₁ and x₂ of D, and for any subset S of the output range, it satisfies

    Pr[A(x₁) ∈ S] ≤ e^ε · Pr[A(x₂) ∈ S] + δ.    (1)

Equation (1) can be interpreted as saying that the outputs from two neighboring databases have only a very small and insignificant difference, such that (almost) nothing can be learned about the user in which they differ. The smaller the numbers ε and δ, the more insignificant the differences become and the stronger the privacy is. In some specific cases, the parameter δ in (1) is set to 0, and the mechanism is then said to be ε-differentially private.

Now we discuss the accuracy of a differentially private mechanism. In this context we are concerned with information from a database which can be used to answer predicate counting queries. The class of those queries is called a concept class, usually denoted by C. The set of all possible values of a database is called the data universe, usually denoted by X. The output of a predicate counting query c on a database x, denoted by c(x), is the proportion of elements in x which satisfy that predicate; for example, the proportion of smokers or the proportion of patients with heart problems in a medical-record database. Then we have the following definition of accuracy.

Definition 2.2. For any c ∈ C, a mechanism A on a database x is said to be α-accurate for c if |c(x) − c(A(x))| ≤ α. Moreover, A is said to be (α, γ)-accurate for a concept class C if A is α-accurate for a (1 − γ) fraction of the queries in C.

The above definition can be interpreted as follows: even though each personal datum has been modified, the proportion of users who satisfy a predicate does not change significantly. We need α to be smaller for better accuracy.
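Definition 2.1 (with δ = 0) can be checked numerically for the classic randomized-response mechanism on a single bit, which reports the true value with probability e^ε/(1 + e^ε) and the flipped value otherwise. This is our own illustrative check, using the local (per-user) variant of the definition rather than the full database formulation: for both possible inputs and both possible outputs, the probability ratio never exceeds e^ε.

```python
import math

# Randomized response on one bit: truthful answer with probability
# e^eps/(1+e^eps), flipped answer otherwise. For every pair of inputs and
# every output, Pr[out | bit=0] / Pr[out | bit=1] <= e^eps, so eq. (1)
# holds with delta = 0.

def rr_probs(bit, eps):
    """Output distribution of randomized response applied to one bit."""
    p_true = math.exp(eps) / (1 + math.exp(eps))
    return {bit: p_true, 1 - bit: 1 - p_true}

eps = 0.5
for out in (0, 1):
    ratio = rr_probs(0, eps)[out] / rr_probs(1, eps)[out]
    assert ratio <= math.exp(eps) + 1e-12   # the epsilon-DP bound of eq. (1)
    print(f"output {out}: Pr-ratio = {ratio:.4f} (bound e^eps = {math.exp(eps):.4f})")
```

The bound is tight here: one of the two ratios equals e^ε exactly, which is why randomized response is the canonical ε-differentially-private primitive.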
Considering that it is very difficult to create a mechanism that is accurate for all queries in a concept class, the parameter γ is introduced; if γ is smaller, then more queries can be answered accurately. Furthermore, a utility parameter of a mechanism can be defined based on its accuracy parameters.

Definition 2.3. Let C be a concept class and X a data universe. A modification mechanism A is said to have (α, β, γ)-utility with respect to C and X if for a database x it holds that

    Pr[A is (α, γ)-accurate] ≥ 1 − β.

There are several kinds of mechanisms which can be used to modify a database in accordance with differential-privacy principles. In this section we introduce the randomized aggregatable privacy-preserving ordinal response (RAPPOR) mechanism; the next sub-section discusses RAPPOR in more detail.

2.2 Randomized aggregatable privacy-preserving ordinal response (RAPPOR)

Let a database consist of several attributes, each of which can be divided into several categories. For example, we can categorize people according to their gender (male/female), age range, and many more. For each attribute, each user is then represented by the category he/she belongs to. To represent the category a user belongs to, we can also use a binary vector with exactly one 1 and 0 elsewhere, where the position of the 1 denotes the category he/she belongs to. These binary vector representations are then modified randomly according to a probability distribution and sent to the other parties, so that they receive an already-modified database. To learn the distribution of categories for each attribute, these parties have to take the aggregate values of each category; because of this, we will call them the data aggregator. The data aggregator does not know the actual distribution of the categories, but he/she may know the probability distribution that was used to modify the database.
However, this knowledge should not be enough to leak actual information about the entire database. There are several kinds of RAPPOR mechanisms, as presented in Wang et al. [3]. In this article we discuss two of them:

1. RAPPOR with direct representation. This data-aggregation mechanism works as follows. Let there be m categories in an attribute, and let user i belong to category c_i. After modification, user i belongs to category c'_i. The probability that user i stays in his/her actual category (c'_i = c_i) is γ, and for each other category j ≠ c_i the probability that user i belongs to category j after modification is (1 − γ)/(m − 1).

2. RAPPOR with unary representation. In this mechanism, the category of a user i is represented as a binary vector x_i = (x_i1, x_i2, …, x_im), where x_ij = 1 if user i belongs to category j and x_ij = 0 otherwise. This binary vector is then modified by adding noise independently to each bit; a bit 0 can be flipped to 1 or vice versa. For each bit there is a flip probability from 0 to 1 and a flip probability from 1 to 0; if these two probabilities are equal, the scheme is called symmetric. The modified vector, denoted x'_i, is then sent to the aggregator. Note that after modification it is possible to have more than one 1, or no 1s at all.

In the next sections we will not discuss the privacy and accuracy aspects using Definition 2.1 and Definition 2.2; instead we use a different approach, based on concepts from information theory, and we will see how it can work.

3 Re-defining Privacy Leakage in Term of Shannon Entropy

There are some open problems arising from the concepts of differential privacy explained in the previous section.
For example, in a differentially private scheme we want to determine the values of ε and δ for which the privacy can be considered good enough, and the values of α, β and γ for which it has good accuracy and/or utility. We are also interested in the practical interpretation of those parameters in a specific mechanism, and in how changes of one or two parameters affect the others. It is difficult to answer these questions, since we do not have well-defined measurements of some parameters of differential privacy; thus we may need another way of measuring the strength of privacy and accuracy. In Wang et al. [4], an idea linking differential privacy and mutual-information privacy was introduced; it should therefore be possible to study differential privacy with an information-theoretic approach. In this section we use a similar idea to re-define some aspects of differential privacy in the language of information theory.

3.1 Shannon entropy and mutual information

Intuitively, stronger privacy implies worse accuracy and vice versa. As a consequence, we cannot have both aspects at their highest level, and we should try to find a solution that "optimizes" both privacy and accuracy; their measurements therefore have to be "sensibly comparable". In this section an alternative definition for the privacy aspect of differential privacy, based on an information-theoretic point of view, is introduced. We first revisit some basic definitions of information theory, based on Cover and Thomas [5].

Definition 3.1. Let X be a random variable with probability distribution P and probability mass function p_x = Pr[X = x]. The Shannon entropy of X, denoted by H(X), is defined as

    H(X) = E[log(1/p_X)] = −Σ_x p_x log p_x.    (2)
The binary entropy function h of an event with probability p is defined as

    h(p) = p log(1/p) + (1 − p) log(1/(1 − p)).    (3)

In some books, Shannon entropy is called simply "entropy". There are many interpretations of Shannon entropy; one of them is the number of binary (yes/no) questions which need to be asked in order to learn an outcome when the probability distribution is known. This interpretation may not be totally accurate, but it is sensible enough for defining the privacy aspect: if the aggregator needs to ask too many questions in order to learn an individual's data, then we can say that the privacy is strong enough.

After being modified, a database might still give some partial information about its actual data. By studying an already-modified database, an aggregator might be able to leak some actual information without knowing the original one. This "leakage" can be represented as the mutual information between an original database and its modified version. The following is the definition of mutual information.

Definition 3.2. Let X and Y be two random variables. The mutual information between X and Y, denoted by I(X;Y), can be computed using any of these equivalent formulas:

    I(X;Y) = H(X) − H(X|Y)
           = H(Y) − H(Y|X)
           = H(X) + H(Y) − H(X,Y)
           = H(X,Y) − H(X|Y) − H(Y|X).    (4)

Moreover, if X and Y have joint probability distribution p_xy and their respective marginal probability mass functions are p_x = Σ_y p_xy and p_y = Σ_x p_xy, then

    I(X;Y) = Σ_{x,y} p_xy log( p_xy / (p_x p_y) ).    (5)

Based on Shannon entropy and mutual information as in Definitions 3.1 and 3.2, we can create new definitions of the privacy aspect of differential privacy. These are discussed in the next sub-section.
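Definitions 3.1 and 3.2 translate directly into code. The short sketch below (ours, working in bits, i.e. log base 2) implements Shannon entropy, the binary entropy function, and mutual information via equation (5), and cross-checks the identity I(X;Y) = H(X) + H(Y) − H(X,Y) from equation (4) on two extreme cases: independent variables (zero shared information) and an exact copy (all of H(X) shared).

```python
import numpy as np

# Shannon entropy (2), binary entropy (3) and mutual information (5),
# with the usual convention 0 log 0 = 0.

def entropy(p):
    """H(X) = -sum_x p_x log2 p_x over the support of p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def binary_entropy(p):
    """h(p), equation (3)."""
    return entropy([p, 1.0 - p])

def mutual_information(pxy):
    """I(X;Y) from a joint probability table, equation (5)."""
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1, keepdims=True)     # marginal of X (rows)
    py = pxy.sum(axis=0, keepdims=True)     # marginal of Y (columns)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

copy = np.diag([0.5, 0.5])                  # Y is an exact copy of a fair bit
indep = np.outer([0.5, 0.5], [0.3, 0.7])    # X and Y independent
print(binary_entropy(0.5), mutual_information(indep), mutual_information(copy))

# Cross-check with equation (4): I = H(X) + H(Y) - H(X,Y).
i4 = entropy(copy.sum(1)) + entropy(copy.sum(0)) - entropy(copy.ravel())
print(i4)
```

A fair coin carries exactly one binary question (h(1/2) = 1 bit), which matches the question-counting interpretation given above.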
3.2 Alternative definitions of privacy leakage

Turning to our application, we directly get an idea for defining privacy leakage. Suppose that C is the actual distribution of an attribute in a database and that after modification the distribution becomes C′. The privacy leakage can be defined as the mutual information between C and C′, i.e. I(C;C′), which can be interpreted as the number of binary questions asked by an aggregator to learn information about the actual distribution C that can be answered by his/her knowledge of the modified distribution C′. By this interpretation, a stronger privacy scheme should have a smaller value of the mutual information between the actual distribution and the modified distribution.

Meanwhile, defining accuracy and utility is trickier. If we want everything to be well-defined, we need a sensible interpretation of utility in terms of the number of binary questions, in order to make the measures of privacy and accuracy/utility comparable. If one entropy in privacy has a different interpretation from one entropy in accuracy/utility, then these measures are incomparable and we will not get what we expected at the beginning. This can be an open problem for possible further research. In the next section we carry out some simulations using the MATLAB software to justify whether our definitions of privacy leakage and utility really make sense.

4 Simulation Using the MATLAB Software

The alternative definition of privacy leakage introduced in the previous section looks sensible, but we need to justify it with some simulations in real and practical cases. Here we first do simulations on privacy leakage. Since the computation for a big enough database would take a long time, we start with some special cases of small databases whose computations do not take much time to complete.
4.1 Case I: RAPPOR with direct representation

Recall the mechanism of RAPPOR with direct representation introduced in Section 2. In this mechanism, a user who belongs to a category stays in his/her actual category with probability $1-\gamma$ and moves into each of the other categories with probability $\gamma/(m-1)$, where $m$ denotes the number of categories. Therefore, if we know the actual distribution c, we can compute the entropy of the conditional probability distribution as
$$H(c' \mid c) = H\!\left(1-\gamma, \frac{\gamma}{m-1}, \ldots, \frac{\gamma}{m-1}\right) = -(1-\gamma)\log(1-\gamma) - \gamma\log\frac{\gamma}{m-1} = h(\gamma) + \gamma\log(m-1). \qquad (6)$$
Now we want to compute the entropy of the modified distribution c'. Let the actual distribution be $c = (p_1, p_2, \ldots, p_m)$. We will determine the probability that a user ends up in category j, no matter what his/her actual category is; denote that probability by $r_j$. If a user is originally in category j (with probability $p_j$), then his/her probability to stay in category j is $1-\gamma$. If he/she is originally in another category i (with probability $p_i$), then his/her probability to move to category j is $\gamma/(m-1)$. Taking the sum of these disjoint cases, we get a formula for $r_j$:
$$r_j = (1-\gamma)\,p_j + \sum_{i \neq j} \frac{\gamma}{m-1}\,p_i = (1-\gamma)\,p_j + \frac{\gamma}{m-1}\,(1-p_j). \qquad (7)$$
Thus the distribution of c' is $(r_1, r_2, \ldots, r_m)$, and the mutual information between c and c' is
$$I(c;c') = H(c') - H(c' \mid c) = -\sum_{j=1}^{m} r_j \log r_j - h(\gamma) - \gamma\log(m-1), \qquad (8)$$
where $r_j$ is as defined in (7). This is the measurement of privacy leakage in this mechanism with probability parameter $\gamma$.
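Formulas (7) and (8) are easy to evaluate numerically. A small Python sketch (function names are mine; entropies in bits):

```python
import math

def h2(p):
    """Binary entropy h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def leakage_direct(c, gamma):
    """Privacy leakage I(c;c') of (8) for RAPPOR with direct representation:
    a user stays with probability 1-gamma and moves to each of the other
    m-1 categories with probability gamma/(m-1)."""
    m = len(c)
    # modified distribution r_j, formula (7)
    r = [(1 - gamma) * pj + gamma * (1 - pj) / (m - 1) for pj in c]
    H_r = -sum(x * math.log2(x) for x in r if x > 0)
    # subtract H(c'|c) = h(gamma) + gamma * log2(m-1), formula (6)
    return H_r - h2(gamma) - gamma * math.log2(m - 1)
```

For the uniform distribution this reduces to formula (9): $\log_2 m - h(\gamma) - \gamma \log_2(m-1)$, and at $\gamma = 0.5$ with $m = 2$ the leakage is 0.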
After obtaining formula (8), we can try to compute it. To simplify the computation, we consider a special case where the actual distribution is uniform, i.e. the users are distributed uniformly over the m categories with probability 1/m each. In this case we have
$$r_j = (1-\gamma)\frac{1}{m} + \frac{\gamma}{m-1}\left(1 - \frac{1}{m}\right) = \frac{1-\gamma}{m} + \frac{\gamma}{m} = \frac{1}{m},$$
so
$$I(c;c') = -\sum_{j=1}^{m} r_j \log r_j - h(\gamma) - \gamma\log(m-1) = \log m - h(\gamma) - \gamma\log(m-1). \qquad (9)$$
Now we evaluate formula (9) as a function of the variable γ. We consider several cases with different numbers of categories: 2, 3, 4 and 5. The results of these computations are shown in Figure 1.

Figure 1. Privacy leakage of RAPPOR with direct representation. Graphs of mutual information I(c;c') as a function of the noise parameter γ are plotted for m = 2, 3, 4 and 5, where the actual distribution c is uniform.

We can see the behavior of those graphs: when γ is closer to 0.5, the mutual information is closer to 0 and therefore the privacy gets stronger. We also see that with more categories, the value of mutual information is bigger. However, we have not yet been able to compare cases with different numbers of categories. Note that formula (9) depends on the value of m, and if m is bigger, then I(c;c') is bigger too. This suggests a kind of "normalization" which makes the value of mutual information fall in the interval [0,1]. If γ = 0, the "normalized" mutual information should be equal to 1, which means that the aggregator is fully able to learn any information in the database since he/she receives the original one. Unfortunately we are yet to find a formulation of the normalization factor.
4.2 Case II: RAPPOR with unary representation

Now we move on to the case of RAPPOR with unary representation. To simplify matters, we consider the symmetric case where the two flip probabilities are equal (to avoid many subscripts, both are written as β). Each user can only belong to one category, and therefore his/her binary vector representation c contains exactly one 1 at his/her category's position and 0 otherwise. Given an arbitrary binary m-vector z, we will compute the probability that the binary vector representation is modified into z. This depends on how many 1s are contained in z, i.e. the Hamming weight of z, usually denoted by w(z). If the 1 in the original vector c is not flipped to 0, then of the m-1 0s in c, w(z)-1 of them are flipped into 1 and the other m-w(z) 0s are not flipped; the probability of this is $(1-\beta)\,\beta^{w(z)-1}(1-\beta)^{m-w(z)}$. On the other side, if the 1 in c is flipped to 0, then of the m-1 0s in c, w(z) of them are flipped into 1 and the other m-1-w(z) 0s are not flipped; the probability of this is $\beta\,\beta^{w(z)}(1-\beta)^{m-1-w(z)}$. As a result, the probability distribution z' of a modified database, knowing that a user belongs to category j, can be written as
$$\Pr(z' = z \mid c = j) = \beta^{\,w(z)+1-2z_j}\,(1-\beta)^{\,m-w(z)-1+2z_j}. \qquad (10)$$
It looks like a tricky task to compute the entropy H(z'|c), since we have to sum over all possible binary vectors z with various Hamming weights and positions of their 1s. However, by using some binomial properties in Rosen [6], we obtain the pretty simple result
$$H(z' \mid c) = m\,h(\beta) = -m\left[\beta\log\beta + (1-\beta)\log(1-\beta)\right]. \qquad (11)$$
Computing H(z') for an arbitrary m-binary vector is a lot more difficult. We have to consider every possible original position of the single 1, multiply by its actual probability, and take the sum.
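Formula (10) can be checked numerically: summed over all $2^m$ possible vectors z it must give 1, and the resulting conditional entropy must match (11). A Python sketch (names are mine):

```python
import itertools
import math

def pr_z_given_c(z, j, beta):
    """Pr(z' = z | c = j) of (10) for symmetric unary RAPPOR: every bit
    of the one-hot vector for category j is flipped independently with
    probability beta. z is a tuple of 0/1; w(z) is its Hamming weight."""
    w, m = sum(z), len(z)
    return beta ** (w + 1 - 2 * z[j]) * (1 - beta) ** (m - w - 1 + 2 * z[j])

m, beta = 4, 0.2
outputs = list(itertools.product((0, 1), repeat=m))
# the probabilities over all 2^m outputs z must sum to 1
total = sum(pr_z_given_c(z, 0, beta) for z in outputs)
# conditional entropy, which should equal m * h(beta) as in (11)
H_cond = -sum(p * math.log2(p)
              for z in outputs for p in [pr_z_given_c(z, 0, beta)])
```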
Based on (10) we can compute the probability mass function of z', denoted by q(z), as
$$q(z) = \Pr(z' = z) = \sum_{j=1}^{m} p_j \Pr(z' = z \mid c = j) = \sum_{j=1}^{m} p_j\,\beta^{\,w(z)+1-2z_j}\,(1-\beta)^{\,m-w(z)-1+2z_j}. \qquad (12)$$
Note that the values of $z_j$ are either 0 or 1, so we can split the last sum in (12) into two cases: $z_j = 0$ and $z_j = 1$. If we define $p(z) = \sum_j z_j\,p_j$, then (12) can be simplified into
$$q(z) = \beta^{\,w(z)+1}(1-\beta)^{\,m-w(z)-1}\left[\sum_{j:\,z_j=0} p_j + \left(\frac{1-\beta}{\beta}\right)^{2}\sum_{j:\,z_j=1} p_j\right] = \beta^{\,w(z)+1}(1-\beta)^{\,m-w(z)-1}\left[1 - p(z) + \left(\frac{1-\beta}{\beta}\right)^{2} p(z)\right]. \qquad (13)$$
Since the entropy calculation involves logarithms and nothing can be simplified from the logarithm of a sum, it is difficult to simplify the H(z') term in this case. Therefore we resort to "brute force" for calculating the mutual information I(c;z') = H(z') - H(z'|c), by directly using the last form of (13) in calculating the H(z') term. Again we set the actual distribution of categories to be uniform, and we compute multiple cases with different numbers of categories: 2, 3, 4 and 5. The results of these computations are shown in Figure 2.

Figure 2. Privacy leakage of RAPPOR with unary representation. Graphs of mutual information I(c;z') as a function of the noise parameter β are plotted for m = 2, 3, 4 and 5, where the actual distribution c is uniform. We see similar behavior to the previous case.
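The brute-force computation of I(c;z') described above can be sketched in a few lines of Python for small m (the paper's simulations use MATLAB; names here are mine):

```python
import itertools
import math

def H_bits(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def leakage_unary(c, beta):
    """Brute-force I(c;z') = H(z') - H(z'|c): H(z') from q(z) of (12),
    H(z'|c) = m*h(beta) of (11). Exponential in m, so small m only."""
    m = len(c)
    q = [sum(pj * beta ** (sum(z) + 1 - 2 * z[j])
             * (1 - beta) ** (m - sum(z) - 1 + 2 * z[j])
             for j, pj in enumerate(c))
         for z in itertools.product((0, 1), repeat=m)]
    return H_bits(q) - m * H_bits([beta, 1 - beta])
```

At β = 0.5 every flip is totally random and the leakage vanishes; smaller β leaks more.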
For any number of categories, the graphs are monotonically decreasing on the interval (0, 0.5]. The difference is that all graphs tend to 0 as β approaches 0.5. We can interpret this as the aggregator being unable to learn anything when β = 0.5, i.e. when the binary flip is totally random. More categories again imply a bigger value of mutual information, but these values are also yet to be normalized. Also note that if the range of β is extended beyond 0.5, the graphs become monotonically increasing. Imagine β = 1: all binary vectors are completely flipped (0 to 1 or vice versa) and the aggregator can easily determine the original ones. We can thus intuitively conclude that the cases β and 1-β are practically similar.

Apart from the two presented cases, we have tried to do the computation for other mechanisms, but some of them have very complicated formulas and are very difficult to compute. Some computations even need several days to be completed. Computation for a large number of categories also remains to be done. There are two possible solutions: simplifying the computation, or determining an upper/lower bound on the privacy leakage which is easier to compute.

5 Conclusions

From what has been discussed in this article, we have several conclusions and suggestions for further research:
1. By interpreting entropies as the number of binary questions which need to be asked to learn information in a database, it is possible to re-define the privacy and accuracy-utility aspects of a differential privacy scheme in terms of entropies. In this article the former has been done.
2. The privacy leakage of a differentially private mechanism can be defined as the mutual information between the actual distribution of categories of an attribute in a database and its modified version.
This definition does make sense, and some simulations with MATLAB have been done to justify it.
3. Defining the accuracy and utility of a differentially private mechanism in terms of entropies is a trickier task. One entropy in the definition of utility should have a similar and comparable interpretation to one entropy in privacy leakage. This may still be a very open problem.
4. The definition of privacy leakage here still lacks "normalization". To make it fully comparable between attributes with various numbers of categories, we might need to normalize the leakage into a specific range (possibly [0,1]); the normalization factors are yet to be determined.
These could be some open problems to solve in further research.

Acknowledgements

This is a part of our work in the "Bridging the Gap between Theory and Practice in Data Privacy" (BRIDGE) project at Technische Universiteit Eindhoven in 2017, funded by the Netherlands Organization for Scientific Research (NWO) and the National Science Foundation (NSF). This project should lead to a PhD degree for which the first author was initially the PhD candidate. However, after several months the first author decided not to continue working on this project.

References
[1] C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating noise to sensitivity in private data analysis," Theory of Cryptography Conference, New York, USA, 265-284, 2006.
[2] C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends in Theoretical Computer Science, 9 (3-4), 211-407, 2014.
[3] T. Wang, J. Blocki, N. Li, and S. Jha, "Locally differentially private protocols for frequency estimation," USENIX Security Symposium, Vancouver, British Columbia, Canada, 729-745, 2017.
[4] W. Wang, L. Ying, and J.
zhang, “on the relation between identifiability, differential privacy and mutual information privacy,” ieee transactions on information theory, 62 (9), 5018 5029, 2016. [5] t. m. cover and j. a. thomas, elements of information theory, second ed, john wiley & sons publication, hoboken, usa, 2006. [6] k. h. rosen, discrete mathematics and its applications, seventh ed, mc grawhill education, new york, usa, 2011. international journal of applied sciences and smart technologies volume 5, issue 1, pages 113-132 p-issn 2655-8564, e-issn 2685-9432 this work is licensed under a creative commons attribution 4.0 international license 113 input power measurement system for driving motor in testing low-speed generator ignasius eko yuliyanto1, tjendro1, bernadeta wuri harini1, martanto1,* 1department of electrical engineering, sanata dharma university, yogyakarta, indonesia *corresponding author: martanto@usd.ac.id (received 28-04-2023; revised 08-05-2023; accepted 11-05-2023) abstract rapid technological advances are affecting the greater use of electrical energy. one of the devices that can generate electrical energy is a generator. testing the characteristics of the generator required a drive motor to rotate the generator shaft. this research aims to create a three-phase input power measurement system for driving a motor. the method of measuring input power is by measuring the current and voltage of each phase. the power is obtained from the multiplication between current and voltage. the system consists of current sensors, voltage sensors, a signal conditioning circuit, and an arduino mega microcontroller for data processing. the system is equipped with a graphical user interface, data storage, and application. the generator input power measurement system has been created and tested. 
The measurement system has successfully measured the input power of the generator's driving motor, which is displayed in real time on a trend graph via the graphical user interface on a laptop. The input power measurement data for the three-phase generator and the time data are stored on a micro-SD card. The average error of the voltage reading is 2% compared to the reference voltmeter; the current reading error is 2% compared to the reference ammeter.

Keywords: driving motor, generator, microcontroller, power measurement

1 Introduction

One tool to generate electrical energy is a generator. Electric generators work by converting mechanical energy into electrical energy [1]. One type of generator is the permanent magnet generator, an electric machine that utilizes mechanical energy to produce electricity, using permanent magnets as the rotor [2]. Low-speed permanent magnet generators are generally used to convert the mechanical power output of water turbines [3] and wind turbines [4][5] into electricity. The rotational speed generally needed for a generator to produce electricity is a minimum of 1500 rotations per minute (rpm) [6]. This study used a generator with a rotation of less than 1500 rpm. The characteristics of the generator can be tested by rotating the generator shaft with a driving motor. The drive motor requires a supply whose frequency and voltage can be adjusted to regulate the rotation of the generator shaft. One driving motor whose rotational speed can be adjusted is a 3-phase induction motor, so a 3-phase drive whose voltage and frequency can be adjusted is needed.
The efficiency of the generator can be found by measuring the electric power output of the generator and the input power of the driving motor. Therefore, it is necessary to measure the three-phase input power at varying voltages and frequencies. The input electric power of the generator driving motor can be obtained by measuring the motor input voltage and current.

Previously, there was research related to measuring power on generators using voltage and current sensors. Power measurement was done in the research entitled "Rancang Bangun Sistem Proteksi Daya Listrik Menggunakan Sensor Arus dan Tegangan Berbasis Arduino" [7]. The research "Rancang Bangun Sistem Monitoring Tegangan, Arus, dan Frekuensi Keluaran Generator 3 Fasa pada Modul Mini Power Plant Departemen Teknik Instrumentasi" [8] is concerned with monitoring the output voltage, current, and frequency of a 3-phase generator. In that study, ZMPT101B and ACS712 sensors were used to obtain voltage and current values, and the output frequency of the generator was also monitored. The values from these sensors are displayed on an LCD and logged by an OpenLogger module. In that research, the 3-phase voltage and current are measured on only one phase, so only one voltage sensor and one current sensor are used. In the research entitled "Perancangan Sistem Monitoring dan Proteksi Daya Balik untuk Generator 1kW 3 Fasa" [9], monitoring uses the ZMPT101B voltage sensor and the ACS712-20A current sensor. The monitoring is used as an early indication to prevent damage to the generator. Three voltage sensors and three current sensors are used to sense each phase, and the sensor data is processed to produce a power value; the power calculated is active power. The system is equipped with a monitoring display using a TFT LCD.
In addition, that system can control the reverse power protection that occurs. However, there is no data storage in memory, so previous data cannot be reviewed, and the measurement is carried out for a power source with a fixed frequency.

The power measurement systems mentioned above were built for power sources with a fixed frequency. In testing the low-speed permanent magnet generator, the motor must be driven at a variable rotational speed from a 3-phase power converter with varying frequencies. Therefore this research builds a 3-phase electric power meter for an electric power source with varying frequencies, originating from the 3-phase converter output used to drive the motor. The input power to the generator driving motor is obtained by multiplying the voltage and current values of each converter output phase, so a voltage sensor and a current sensor are needed for each phase. The voltage sensors are placed in parallel on each phase to measure the phase voltage, while the current sensors are placed in series on each phase against the given load to measure the phase current. Measuring all three phases requires 3 voltage sensors and 3 current sensors. The signals from the voltage and current sensors are processed using a microcontroller. The resulting voltage, current, and power data are displayed digitally on the LCD and through the GUI in graphical form, can be stored on a micro-SD card, and can be communicated to other microcontrollers via serial communication.
2 Methods

At this stage, a system is designed to measure the electric power from the 3-phase converter which is input to the generator drive motor used in testing the low-speed permanent magnet generator. In this study, the generator used was a low-speed permanent magnet generator with a maximum mechanical rotation of 1395 rpm (rotations per minute). The stage starts with designing the appropriate system block diagram. The next step is to design and implement hardware and software according to the block diagram. The system is then tested and data is collected to check whether the results of the tool conform to the design, and the data is analyzed to draw conclusions.

The input power measurement system for the generator drive motor is part of a larger system that performs several measurements in testing the low-speed permanent magnet generator. This research is focused on measuring the input electric power of the generator drive motor. Electric power is defined as work per unit of time, or electrical energy dissipation per unit of time; the unit of power is the watt, or joules per second. DC power can be measured by measuring current and voltage. There are two measurement configurations, as shown in Fig. 1, with the symbol E being the voltage source, A the measured current, and V the voltage. The electric power P is the product of the current and voltage values.

Figure 1. Configuration of power measuring [10]

AC power measurement can be done like DC power measurement, namely through current and voltage measurements. The difference is that for AC, the current and voltage can have different waveforms and different phases. There are several definitions in AC power measurement.
Instantaneous power, namely the product of the instantaneous voltage and the instantaneous current flowing in the load, is expressed by (1):
$$p(t) = v(t)\,i(t). \qquad (1)$$
In an AC circuit with a periodic AC voltage source of period T, the average power or active power P is defined as stated in (2). Active power is the average power consumed by the load [11]:
$$P = \frac{1}{T}\int_{0}^{T} p(t)\,dt. \qquad (2)$$
In an AC circuit with a resistive load, the instantaneous power is expressed by (3), where V is the rms (root-mean-square) voltage, I is the rms current, and ω is the angular frequency of the power source. The active power is the product of V and I:
$$p(t) = VI\,(1 - \cos 2\omega t). \qquad (3)$$
For a purely reactive load, the instantaneous power is expressed as in (4); the active power for a purely reactive load is zero:
$$p(t) = VI \cos 2\omega t. \qquad (4)$$
For loads with resistive and reactive components, there is a phase difference between the voltage wave and the current wave, expressed by the angle φ. The active power P is expressed by (5), where $V_L$ is the rms voltage at the load, $I_L$ is the rms current of the load, and cos φ is called the power factor. The product of $V_L$ and $I_L$ is called the apparent power, denoted $P_a$ or S, as shown in (6). Reactive power Q is defined as the product of $P_a$ and sin φ, as shown in (7):
$$P = V_L I_L \cos\varphi, \qquad (5)$$
$$P_a = V_L I_L, \qquad (6)$$
$$Q = V_L I_L \sin\varphi. \qquad (7)$$
The mathematical relationship between the types of power, namely active power, reactive power, and apparent power, follows the principle of trigonometry, as shown by the power triangle in Fig. 2. The relationship between apparent power S, active power P, and reactive power Q is shown by (8):
$$S^2 = P^2 + Q^2. \qquad (8)$$

Figure 2. Power triangle.
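The relations (5)-(8) can be illustrated with a short Python sketch (function name and the example values are mine):

```python
import math

def power_components(v_rms, i_rms, phi):
    """Active, reactive, and apparent power for rms load voltage and
    current with phase difference phi (radians); see (5)-(7)."""
    s = v_rms * i_rms           # apparent power, (6)
    p = s * math.cos(phi)       # active power, (5)
    q = s * math.sin(phi)       # reactive power, (7)
    return p, q, s

# power-triangle check of (8): S^2 = P^2 + Q^2
p, q, s = power_components(230.0, 5.0, math.radians(30.0))
```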
The input power measurement system in this research is equipped with an LCD viewer and a graphic display on a computer, data recording on a micro-SD card, and an RS-485 serial communication facility to communicate with other microcontrollers. Fig. 3 shows a block diagram of the power measurement system design.

Figure 3. Block diagram of the system

a. Hardware design

This input power meter consists of three voltage sensors, three current sensors, an RTC module, a push button, a microcontroller, an LCD viewer, a micro-SD memory module, a GUI on a laptop, and an RS-485 communication module. Voltage and current measurements are carried out at the output of a 1-phase to 3-phase converter with the generator driving motor as the load. The microcontroller used is an Arduino Mega 2560. The data obtained from the measurements is displayed on a 16x2 LCD and on the GUI in the form of a trend graph, and the measurement data is stored on a micro-SD card. Based on the system block diagram in Fig. 3, the wiring between the components or modules is designed. The wiring diagram is shown in Fig. 4. The output of each voltage sensor and current sensor is connected to the microcontroller's analog input pins. The 16x2 LCD module is used to display the measured power of each phase and the total power. The DS1302 RTC module is used as a real-time reference for recording data stored on the memory module (micro-SD). The RS-485 module is used for communication with other microcontrollers for data collection purposes.

Figure 4. Wiring diagram

The voltage sensor measures the voltage of each converter output phase and is designed to measure voltages up to 250 volts rms.
The voltage sensor used is shown in Fig. 5. The voltage sensor circuit consists of three step-down transformers with a primary-to-secondary turns ratio of 220:12. Each transformer's secondary output is connected to a signal conditioning circuit (PS). The signal conditioning circuit consists of a diode bridge, some resistors, and a capacitor; the resistors and capacitor function as a filter so that the output voltage of the circuit is smooth (DC). The characteristics of the sensor are obtained by measuring the converter voltage and the DC output voltage of the sensor, then carrying out a calibration process. Measurements are made for varying converter voltages. The output of each voltage sensor is connected to a microcontroller analog pin.

Figure 5. The circuit of the voltage sensor

Three ACS712-05 current sensors are placed one on each phase. The current sensor is designed to measure up to 3.5 A (rms), or a peak current of 4.949 A. When the sensor detects 0 amperes, the current sensor output is 2.5 volts. According to the sensitivity of the ACS712-05 sensor, for every 1-ampere increase, the current sensor output increases by 0.185 volts. At the positive peak of 4.949 A, the output voltage of the current sensor is vout = 2.5 + (0.185 × 4.949) = 3.416 V. Meanwhile, at the negative peak of -4.949 A, the sensor output voltage becomes vout = 2.5 + (0.185 × -4.949) = 1.584 V. These values are used as the extreme output voltage values of the current sensor to be processed by the microcontroller.

b. Software design

The input power measurement system uses three voltage sensors to get the voltage value of each phase, and three current sensors to get the current value of each phase.
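The ACS712-05 scaling described above (2.5 V at 0 A, 0.185 V/A) can be expressed as a small conversion helper. This is an illustrative sketch of the arithmetic, not the paper's firmware:

```python
ACS712_ZERO_V = 2.5    # sensor output voltage at 0 A
ACS712_SENS = 0.185    # volts per ampere for the ACS712-05

def acs712_to_current(v_out):
    """Convert the sensor output voltage (V) to current (A)."""
    return (v_out - ACS712_ZERO_V) / ACS712_SENS

def current_to_acs712(i):
    """Inverse mapping: expected sensor output voltage for current i (A)."""
    return ACS712_ZERO_V + ACS712_SENS * i
```

At the +/-4.9497 A peaks this reproduces the extreme output voltages quoted in the text (about 3.416 V and 1.584 V).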
The voltage and current values are processed by the microcontroller to get the power value of each phase. The program starts by initializing the ports used on the Arduino Mega 2560. The ports used in the design serve modules such as the RTC, the micro-SD memory module, the LCD, the push button, and the RS-485 communication module. In addition, the analog ports used to read the voltage and current sensors are initialized. When the push button is pressed, the system starts working. The system works by taking voltage and current data for each phase, sampled with a certain period. The current and voltage measured by the sensors are averaged using the moving average method to remove noise [12]. The flowchart of the measurement system can be seen in Fig. 6.

Figure 6. Flowchart of measurement system: (a) voltage measurement, (b) current measurement

The voltage signal is a DC level (after the sensor's rectifier and filter), while the current signal is a sinusoidal wave. The data is taken in stages, starting with the voltage and then the current. The voltage is sampled every 400 µs with a total of 100 data points, so one cycle takes about 40 ms. The current is sampled every 1.5 ms with a total of 250 data points, so one cycle takes about 375 ms. The data collection is carried out alternately: first the voltage on the R phase, second the voltage on the S phase, and third the voltage on the T phase. The voltage data uses a calibration equation obtained with a true-rms multimeter, so the processed data represents the rms voltage. The calibrated voltage of each phase is calculated using (9)-(11).
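The averaging and rms arithmetic described above can be mimicked off-target in Python; the actual firmware runs on the Arduino, so this is only an illustrative sketch (the window size is my assumption):

```python
import math

def moving_average(samples, window=10):
    """Simple moving average used to suppress noise in sensor readings."""
    return [sum(samples[k:k + window]) / window
            for k in range(len(samples) - window + 1)]

def rms(samples):
    """Root-mean-square of one cycle of sampled current."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# sampling plan from the text: 100 voltage samples every 400 us (~40 ms),
# 250 current samples every 1.5 ms (~375 ms); here one synthetic cycle
current_cycle = [math.sin(2 * math.pi * k / 250) for k in range(250)]
```

For a full sine cycle the rms evaluates to amplitude divided by the square root of 2, as expected.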
$$V_{R\_phase} = 81.425 \times V_{avg} - 97.778 \qquad (9)$$
$$V_{S\_phase} = 71.653 \times V_{avg} - 67.291 \qquad (10)$$
$$V_{T\_phase} = 88.972 \times V_{avg} - 112.96 \qquad (11)$$
The calculation of the current values can be seen in (12)-(14):
$$i_{R\_phase} = 0.0275 \times i_{rms} - 0.0146 \qquad (12)$$
$$i_{S\_phase} = 0.0264 \times i_{rms} + 0.0034 \qquad (13)$$
$$i_{T\_phase} = 0.0274 \times i_{rms} - 0.0105 \qquad (14)$$
The rms data stored in a variable is processed into a per-phase power value. After the per-phase power values are obtained, the microcontroller computes the total power value. All processed data, both per-phase input data and total input power data, is then stored on the micro-SD card and displayed on the LCD.

3 Results and discussions

The resulting design of the tool is shown in Fig. 7. The system that has been made is shown at the bottom left, assembled with a 3-phase converter, a 3-phase induction motor as the drive, and the permanent magnet generator being tested. The generator load is not shown in this figure.

Figure 7. Result of the design

The procedure for using the tool is as follows. First, when the system is supplied with power, a description of the measuring instrument and the date, month, and year are displayed on the LCD. Then the LCD displays the condition of the SD card storage. The push button is used to activate and stop the measurement process, with an indicator description appearing on the LCD: pressing the push button the first time activates the measurement system, and pressing it a second time stops the measurement system. The measurement results are displayed on the LCD directly.
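The per-phase calibration lines (9)-(14) and the power computation can be sketched as follows; the coefficient table simply restates the equations above, and the function names are hypothetical, not from the firmware:

```python
# calibration coefficients (slope, offset) from (9)-(11) and (12)-(14)
V_CAL = {'R': (81.425, -97.778), 'S': (71.653, -67.291), 'T': (88.972, -112.96)}
I_CAL = {'R': (0.0275, -0.0146), 'S': (0.0264, 0.0034), 'T': (0.0274, -0.0105)}

def phase_power(phase, v_avg, i_raw):
    """Calibrated voltage, current, and power P = V*I for one phase."""
    av, bv = V_CAL[phase]
    ai, bi = I_CAL[phase]
    v = av * v_avg + bv
    i = ai * i_raw + bi
    return v, i, v * i

def total_power(readings):
    """Total power shown on the LCD: sum of the per-phase powers.
    readings maps a phase label to its (v_avg, i_raw) pair."""
    return sum(phase_power(ph, v, i)[2] for ph, (v, i) in readings.items())
```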
The memory module automatically stores data once the user activates the measurement system. The measurement data is stored in real time together with timing data obtained via the RTC DS1302, which records the time, day, and date of each measurement. Data is stored in micro-SD memory in the .csv format to make the data easier to process; the stored values are delimited by commas so that the data is easy to classify. The process of storing data on the SD card begins when the system is ready to start measuring: when the system detects the SD card, the system is ready to measure. Data storage takes place every 1 second after the measurement system starts. When the measurement system stops or is inactive, the process of saving data to the SD card stops.

Power measurement data can be seen on the 16x2 LCD, as shown in Fig. 8. After the push button is pressed, the system starts measuring power. The LCD displays the R phase power (PR), S phase power (PS), T phase power (PT), and total power. In addition, the voltage measurement data, current measurement data, and power calculation results can be monitored through the 'serial monitor' in the Arduino software.

Figure 8. LCD of power per phase and total power.

Testing of the measurement system is carried out by connecting the hardware (the resulting measuring instrument) to the converter and the driving motor as the load. As a reference for current measurement, 3 multimeters are used, shown as numbers 1 to 3 in Fig. 9; multimeters 4 and 5 are used as reference instruments for measuring voltage. As a load on the generator, several lamps are used, which can be varied.
In addition to variations in the generator load, tests were also carried out with a variety of converter indicator settings. The converter indicator is the number printed on the converter LCD: it is the sinusoidal frequency setting that adjusts the rotational speed of the driving motor. The variation used is from 3 Hz to 13 Hz.

Figure 9. Multimeters as references for measuring voltage and current

a. Voltage meter testing

The voltage measurement test uses a generator load of six lamps of 25 watts each and a variation of the converter indicator. The test is carried out in turn: the first voltage sensor measures the voltage on the R phase, the second voltage sensor the voltage on the S phase, and the third voltage sensor the voltage on the T phase. This voltage sensor test is carried out after the calibration process. The test results are shown in Table 1 for the R-phase voltage, Table 2 for the S-phase voltage, and Table 3 for the T-phase voltage.

Based on Tables 1, 2, and 3, the voltage measured by the sensors is not much different from the reference multimeter values. The average error of the first sensor for phase R is 1.77%, of the second sensor for phase S 1.08%, and of the third sensor for phase T 0.61%. The results show that the three voltage sensors work well, with errors of less than 2% relative to the reference multimeter.

Table 1.
Data of the R-phase voltage sensor

No. | Voltage sensor (V) | Multimeter reference (V) | Error (%) | Converter indicator (Hz)
1 | 67.63 | 68.90 | 1.84 | 3
2 | 72.68 | 72.40 | 0.39 | 4
3 | 79.15 | 77.20 | 2.53 | 5
4 | 84.26 | 81.90 | 2.88 | 6
5 | 89.16 | 86.50 | 3.08 | 7
6 | 93.27 | 90.80 | 2.72 | 8
7 | 97.31 | 94.90 | 2.54 | 9
8 | 101.37 | 99.00 | 2.39 | 10
9 | 104.45 | 103.20 | 1.21 | 11
10 | 107.90 | 106.40 | 1.41 | 12
11 | 111.28 | 110.40 | 0.80 | 13
12 | 114.10 | 114.00 | 0.09 | 14
13 | 117.11 | 117.50 | 0.33 | 15
Average | 95.36 | 94.08 | 1.71 |

Table 2. Data of the S-phase voltage sensor

No. | Voltage sensor (V) | Multimeter reference (V) | Error (%) | Converter indicator (Hz)
1 | 65.17 | 66.30 | 1.70 | 3
2 | 70.55 | 72.00 | 2.01 | 4
3 | 76.35 | 77.20 | 1.10 | 5
4 | 81.45 | 81.40 | 0.06 | 6
5 | 86.25 | 86.00 | 0.29 | 7
6 | 90.27 | 90.20 | 0.08 | 8
7 | 94.19 | 94.40 | 0.22 | 9
8 | 97.75 | 98.40 | 0.66 | 10
9 | 101.32 | 102.00 | 0.67 | 11
10 | 104.80 | 105.80 | 0.95 | 12
11 | 108.28 | 109.90 | 1.47 | 13
12 | 111.04 | 113.50 | 2.17 | 14
13 | 114.07 | 117.20 | 2.67 | 15
Average | 92.42 | 93.41 | 1.08 |

Table 3. Data of the T-phase voltage sensor

No. | Voltage sensor (V) | Multimeter reference (V) | Error (%) | Converter indicator (Hz)
1 | 66.44 | 67.20 | 1.13 | 3
2 | 71.90 | 72.50 | 0.83 | 4
3 | 77.65 | 77.80 | 0.19 | 5
4 | 83.05 | 82.30 | 0.91 | 6
5 | 87.94 | 87.10 | 0.96 | 7
6 | 92.33 | 91.30 | 1.13 | 8
7 | 96.45 | 95.50 | 0.99 | 9
8 | 100.11 | 99.40 | 0.71 | 10
9 | 104.03 | 103.50 | 0.51 | 11
10 | 107.41 | 107.40 | 0.01 | 12
11 | 111.34 | 111.20 | 0.13 | 13
12 | 115.15 | 115.10 | 0.04 | 14
13 | 118.30 | 118.80 | 0.42 | 15
Average | 94.78 | 94.55 | 0.61 |

b. Current meter testing

Before being used in the measuring system, the current sensors went through a calibration process. The calibration is carried out by measuring the current from a voltage source, using a TDGC2-0.5 kVA AC variable transformer with a load of several lamps that can be varied, and comparing against a reference multimeter. This calibration determines the actual characteristics of each current sensor, since each sensor has different characteristics.
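The calibration described here amounts to fitting a straight line (the form of Eqs. (12)–(14)) to pairs of raw sensor readings and reference-multimeter values. A minimal least-squares sketch; the calibration pairs below are hypothetical sample points, not the paper's measured data.

```python
# Least-squares fit of a calibration line y = a*x + b, as used to obtain
# the slope/intercept pairs of Eqs. (12)-(14). Sample data are made up.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical calibration pairs: raw RMS reading vs. reference current (A).
raw = [40, 45, 50, 55, 60]
ref = [1.08, 1.22, 1.36, 1.49, 1.63]
slope, intercept = linear_fit(raw, ref)
```

With these sample points the fit lands near the slope and intercept magnitudes seen in Eq. (12), which is the kind of result the calibration step produces for each sensor.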
The calibration data are taken with variations in load and voltage: the voltage is varied in 25 V steps up to 125 V, and the load is varied up to four lamps of 100 watts each. This calibration is carried out on all three sensors used, and a linearity equation is fitted to the data taken.

Current meter testing is carried out by applying a lamp load to the generator: six lamps of 25 watts each, with a variation of the converter indicator. The test is carried out in turn: current sensor 1 measures the current in phase R, current sensor 2 the current in phase S, and current sensor 3 the current in phase T. The test results are shown in Table 4 for the R-phase current, Table 5 for the S-phase current, and Table 6 for the T-phase current.

Table 4. Data of the R-phase current sensor

No. | Current sensor (A) | Multimeter reference (A) | Error (%) | Converter indicator (Hz)
1 | 1.23 | 1.25 | 1.60 | 3
2 | 1.25 | 1.26 | 0.79 | 4
3 | 1.31 | 1.31 | 0.00 | 5
4 | 1.36 | 1.35 | 0.74 | 6
5 | 1.36 | 1.36 | 0.00 | 7
6 | 1.37 | 1.36 | 0.74 | 8
7 | 1.34 | 1.33 | 0.75 | 9
8 | 1.29 | 1.31 | 1.53 | 10
9 | 1.29 | 1.28 | 0.78 | 11
10 | 1.26 | 1.26 | 0.00 | 12
11 | 1.25 | 1.24 | 0.81 | 13
12 | 1.22 | 1.22 | 0.00 | 14
13 | 1.19 | 1.20 | 0.83 | 15
Average | 1.29 | 1.29 | 0.66 |

Table 5.
Data of the S-phase current sensor

No. | Current sensor (A) | Multimeter reference (A) | Error (%) | Converter indicator (Hz)
1 | 1.19 | 1.19 | 0.00 | 3
2 | 1.25 | 1.25 | 0.00 | 4
3 | 1.33 | 1.32 | 0.76 | 5
4 | 1.36 | 1.36 | 0.00 | 6
5 | 1.39 | 1.39 | 0.00 | 7
6 | 1.37 | 1.38 | 0.72 | 8
7 | 1.35 | 1.36 | 0.74 | 9
8 | 1.32 | 1.33 | 0.75 | 10
9 | 1.31 | 1.31 | 0.00 | 11
10 | 1.28 | 1.28 | 0.00 | 12
11 | 1.25 | 1.26 | 0.79 | 13
12 | 1.24 | 1.24 | 0.00 | 14
13 | 1.22 | 1.22 | 0.00 | 15
Average | 1.30 | 1.30 | 0.29 |

Table 6. Data of the T-phase current sensor

No. | Current sensor (A) | Multimeter reference (A) | Error (%) | Converter indicator (Hz)
1 | 1.25 | 1.25 | 0.00 | 3
2 | 1.29 | 1.29 | 0.00 | 4
3 | 1.34 | 1.34 | 0.00 | 5
4 | 1.38 | 1.38 | 0.00 | 6
5 | 1.39 | 1.40 | 0.71 | 7
6 | 1.38 | 1.38 | 0.00 | 8
7 | 1.36 | 1.36 | 0.00 | 9
8 | 1.33 | 1.34 | 0.75 | 10
9 | 1.31 | 1.32 | 0.76 | 11
10 | 1.30 | 1.29 | 0.78 | 12
11 | 1.27 | 1.27 | 0.00 | 13
12 | 1.24 | 1.25 | 0.80 | 14
13 | 1.23 | 1.23 | 0.00 | 15
Average | 1.31 | 1.32 | 0.29 |

In Tables 4, 5, and 6 it can be seen that the current measured by the sensors is not much different from the value shown by the multimeter. The average error of sensor 1 in phase R is 0.66%, of sensor 2 in phase S 0.29%, and of sensor 3 in phase T 0.29%. The results show that the three current sensors work properly, with errors of less than 1% compared to the reference multimeter.

c. Power meter testing

The power measurement results are obtained while measuring the voltage and current; the power data are recorded from the serial monitor of the Arduino software. Tests are carried out both with the generator unloaded and with the generator loaded, from a load of 1 lamp up to 6 lamps. In addition, the tests vary the converter indicator over the range 3 to 13.
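The error figures quoted in the sensor tests follow the usual relative-error definition against the reference instrument. A short sketch that reproduces the R-phase average from the rows of Table 4 (sensor reading first, reference second); the function name is illustrative.

```python
# Relative error of a sensor reading against the reference multimeter,
# in percent, and the per-phase average over all converter settings.

def error_pct(sensor, reference):
    return abs(sensor - reference) / reference * 100.0

# (sensor, reference) pairs from the R-phase rows of Table 4.
r_phase = [(1.23, 1.25), (1.25, 1.26), (1.31, 1.31), (1.36, 1.35),
           (1.36, 1.36), (1.37, 1.36), (1.34, 1.33), (1.29, 1.31),
           (1.29, 1.28), (1.26, 1.26), (1.25, 1.24), (1.22, 1.22),
           (1.19, 1.20)]

avg_error = sum(error_pct(s, r) for s, r in r_phase) / len(r_phase)
# avg_error rounds to 0.66 %, the value reported for sensor 1 in phase R.
```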
The measurement results for a generator load of 6 lamps are shown in Figs. 10–12. The 'measure' curve is the power measured and processed by the microcontroller, while the 'calculate' curve is the power value obtained by multiplying the voltage and current measured with the multimeters. With each increase in converter frequency the power increases, indicating that the generator power rises with the generator shaft rotation.

From the test data used to make the graphs, the error between the measurement results and the calculation results has been computed. The average error of the R-phase power is 2%, of the S-phase power 1.26%, and of the T-phase power 0.75%; the average error over all tests is 1.4%. Since the average error is less than 2%, the apparent power (VA) measurement system can be said to work properly.

Figure 10. Results of measuring the R-phase power and calculating the power based on the multimeters, with variations of the frequency converter

Figure 11. Results of measuring the S-phase power and calculating the power based on the multimeters, with variations of the frequency converter

Figure 12. Results of measuring the T-phase power and calculating the power based on the multimeters, with variations of the frequency converter

d. Communication with the GUI and other microcontrollers

After the device is connected to a computer/laptop, the user can activate it by pressing the push button, and data are sent over serial communication. The serial data are then read by a GUI application program written in Python and plotted as a trend graph. The GUI display can be seen in Fig. 13.
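A minimal sketch of the GUI's scrolling trend window, which shows at most a fixed number of samples and restarts the curve from the left edge once the right boundary is reached. The window size of 10 matches the behaviour described for the plot, but all names and the data-structure choice are illustrative, not the actual GUI code.

```python
from collections import deque

# Rolling trend window: at most WINDOW samples are shown; once the right
# edge is reached, the curve restarts from the left (window is cleared).
WINDOW = 10

def push_sample(window, power_va):
    """Append a new total-power sample, wrapping to the left when full."""
    if len(window) == WINDOW:
        window.clear()          # restart the curve from the left edge
    window.append(power_va)
    return window

w = deque()
for t in range(25):             # simulate 25 seconds of samples
    push_sample(w, 260.0 + t)
```

After 25 samples the plot would be on its third sweep, showing the 5 most recent points.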
The power value of each phase and the total power can be displayed numerically and graphically. The horizontal axis shows the measurement time in seconds, and the vertical axis shows the measured power in VA. One window is displayed for every 10 data points: when the curve reaches the right boundary of the graph, the next curve is displayed starting again from the left.

Figure 13. The graphical trend display in the GUI application

Communication between the system and another microcontroller (the master) is designed using an RS-485 serial communication module. The data communication process takes place only while the measuring system is active; if the measurement system is not active, there is no communication. The communication uses the two RS-485 lines, A and B. It starts when the master sends the coded address it wants; each measurement system on the low-speed permanent magnet generator has its own mutually agreed coding address. For the input power measurement, the coding address is #i$. When the slave receives data matching its coding address, it immediately sends the measurement data. The format of the data transmitted by the slave is "#i, total power, $". Fig. x shows the result of the communication between the slave and master microcontrollers.

4 Conclusions

Based on the design, implementation, and testing of the input power measurement system for the driving motor, the following can be concluded. The measurement system has succeeded in measuring the voltage, current, and apparent power of each phase, as well as the total input power of the generator drive motor. Real-time measurements are monitored and displayed on the LCD and as graphs in the GUI on the laptop.
Instantaneous measurement data can also be displayed on the LCD. The voltage and current sensors used can measure the input power per phase of the generator with values close to the multimeter reference. The input power measurement data and the time data have been successfully stored on the micro-SD. The average voltage reading error is 2% against the reference voltmeter, the current reading error is 1% against the reference ammeter, and the average power measurement error is 2% against the reference multimeter.

Acknowledgements

The author thanks LPPM Universitas Sanata Dharma for supporting this research.

International Journal of Applied Sciences and Smart Technologies, Volume 1, Issue 2, pages 147–168, p-ISSN 2655-8564, e-ISSN 2685-9432

Morphological Map Analysis in Design Cashew Sheller (Kacip) as a Creative Process to Produce Design Concept

Bertha Bintari Wahyujati
Department of Mechatronics Product Design, Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia
Corresponding author: bertha.bintariwahyujati@gmail.com
(Received 29-05-2019; revised 17-10-2019; accepted 17-10-2019)

Abstract

The design of the cashew nut sheller uses appropriate or low technology, with consideration of low cost for the tool material. This sheller (kacip) will be used at UKM Ngudi Koyo, Imogiri, Bantul, Yogyakarta. The cashew shell peeler resulting from the design is a modification of the existing cashew shell peeler: some parts of the existing tool are retained, while several parts are modified, namely the lever mechanism, the picking knife, and the lever knife.
This paper discusses the method of selecting the suppressor, lever, and picking system of the tool using the morphological chart analysis method. The morphological chart produces alternative designs for the cashew nut peeler. The selection among the alternative designs is carried out by analyzing the results of testing of the technical mechanism, the material strength, and the quality values of the alternative designs. Testing of the technical mechanisms of the alternatives is done by comparison with the mechanical systems of existing tools. The size of the tool uses the anthropometric measurements of the female operator's body, because the operators at UKM Ngudi Koyo are all women. Adjusting the tool size makes the work more comfortable and increases efficiency. In addition to applying standard anthropometric measures, the design is tested for ease of maintenance, ease of mobility, cleanability, neatness, simplicity, and safety.

Keywords: effective technology, low-technology tool, security, design alternative testing

1 Introduction

The UKM Ngudi Koyo production system depends on supply and orders. Production cannot be carried out continuously because the supply of cashew nuts is not always of the same quantity and quality, and because cashews can only be harvested in October each year. These unstable quantities force the production system to be flexible: production depends on the orders, the availability of material, and the available time of the workers. As a consequence, the production equipment is not always in operation. Equipment that is not being operated is stored in the UKM's production house, so the equipment needs to be compact, space-saving, and portable [1].
The production operators at UKM Ngudi Koyo are all farmers, so the cashew nut production business is a side business. As a side business, the production of cashew nuts is expected to be done in the free time after all farming work is completed. The workers, who are all women, work on stripping the shell, stripping the epidermis, and packing the cashews. The work is often taken home because of the flexible working time, and it can be combined with other jobs at home. Therefore, a sheller that is easy to move becomes very important. Besides construction strength, flexibility is also needed, including the possibility of the workers using the tool on their own tables, easy operation, and availability at any time. A tool whose capacity matches the capability of each worker can be owned as a personal tool. With these specification constraints, the design method applied is morphological chart analysis. The morphological chart facilitates the selection of components for each part of the tool. After obtaining the alternative designs, they are analyzed to determine the best alternative for the design to be improved. The cashew nut sheller is expected to be used for small-scale production, empowering the operators and improving the value of cashew nut production at UKM Ngudi Koyo.

2 Technical Data and Specifications of Cashew Nuts

The cashew nut is the fruit of the cashew tree. The cashew plant, with the Latin name Anacardium occidentale L., lives in dry areas with little rainfall. The visible cashew fruit is a false fruit formed by the enlargement of the fruit stalk; the true fruit, called the cashew nut, is located at the tip of the fruit.
Cashew trees are planted as filler plants between the fields and villages of the villagers. One cashew tree can produce on average about 1 quintal of cashews. Cashew trees have only one harvest season per cycle: in August the trees start flowering, and the fruit is ready for harvest in October. The cashew fruits are harvested for their nuts. The cashew nuts are then dried under the sun for about one week; drying the nuts allows the raw material to be stored in the production houses. Perfectly dried cashew nuts can be stored for more than a year. The production process usually starts before the holidays, based on orders. Storage is therefore important before the cashews go through the next process, the separation of the nuts from the shells using the kacip.

Figure 1. Cashew and its parts [2]

Cashew shells have a hard skin called the pericarp, consisting of three layers: the epicarp, the mesocarp, and the endocarp. The epicarp is the outermost layer, which is hard and tough. The mesocarp is the middle layer, the thickest of the three; it contains conduits that carry CNSL (cashew nut shell liquid), which is sticky and toxic. This liquid can irritate the skin and is toxic if ingested. The endocarp is a soft inner layer [2]. These are all illustrated in Figure 1, where (a) shows the cashew fruit, which is also the fruit stalk, (b) shows the cashew nut, and (c) shows the cashew shell and its parts: 1. nut skin, 2. epicarp, 3. mesocarp, 4. endocarp, 5. epidermis, and 6. cashew nut.
The cashew shell measurement data used to design the sheller include the average size of the cashew shells, the depth of skin that can be penetrated by the blade without breaking the nut, the speed of the knife pressure, the blade pressure on the nut, and the most effective direction for extracting the nut; see Table 1.

Table 1. Results of measurement and weighing of cashew nuts [2]
Criteria A–F: length (mm), width (mm), thickness (mm), weight (g)

Figure 2. Positions of measurement of cashew shells [3]

These data are used to determine the optimal size of the cashew nut clamp. The optimal size must be able to accommodate every nut size, so flexibility in clamping different sizes of cashews must be considered. The press acts on the outer shell of the nut so as to penetrate at most the mesocarp, the porous middle layer that contains the CNSL (cashew nut shell liquid). The required data are the average depth from the outermost layer of the skin to the middle layer, which determines the maximum depth of the knife piercing the cashew nut. The depth of the blade determines the force used to press the piercing blade. From the data, the average cashew sizes were divided into 6 criteria, and a flexible clamp is selected that can accommodate all 6 size types. From the compressive velocity data and the average knife depth, the press speed that produces the required blade depth and compressive force can be obtained; this compressive force determines the type of spring and the spring material to be used. Table 2 shows the press speed for the depth of the cashew nut blade, and Table 3 the relationship between the compressive force applied and the change in cashew nut size.
The positions of compressing the cashew nuts are shown in Figure 3.

Table 2. Press speed for the depth of the cashew nut blade [2]
Columns: press speed (m/s), average knife depth (mm), press force

Table 3. Relationship between the compressive force applied (kgf) to the cashew nut and the size changes that occur [2]
Columns: no., maximum load, average compressive force (kgf), reduced size in shape (mm); rows: 1. pressure on the thick side, 2. pressure on the long side, 3. pressure on the width

Figure 3. Positions of compressing cashew nuts [2]

Before being pressed by the knife, the cashew nut should be positioned so that the blade meets the middle of the nut.
In this position, the cashews are not too deformed by the pressure; deformation under too large a pressure would cause the cashews to split when the knife pierces the shell. The position that gives the optimal pressure and is able to open the shell is position B, because of the inner basin in the shape of the cashew nut. To open the shell without splitting the nut, the tip of the blade must be shaped as a triangle so that it pierces the basin of the cashew.

The force of pressing the knife into the cashew nut determines the shape of the blade tip chosen. The position of the knife piercing the cashew nut, and the orientation of the nut against the blade (horizontal or vertical), determine the amount of pressure applied to the blade. The position of the cashew nut with respect to its shape determines the size of the blade. The depth of the blade is limited by the maximum depth to the middle layer, so the blade should be tapered at the tip. After the blade pierces the cashew nut, the blade is tilted, acting as a lever to separate the nut from the shell.

Figure 4. Maximum depth of the blade

The blade is tilted to lever at a minimum of 15–20 degrees. The magnitude of the lever angle affects the leverage, or torque, applied to the blade. This is illustrated in Figure 4, with details in Table 4.

Table 4. Relationship between the torque magnitude and the torsion angle to release cashew nuts from the skin [2]
Columns: size, angle of the suppressor (degrees), average maximum torque (kgf·cm)

The production process at UKM Ngudi Koyo cannot be finished in a one-day process; the work is carried out separately in each employee's house.
The production process is handled as separate jobs: workers peel the nuts and the skin of the cashews at home. In one day, a worker is usually able to peel as much as 7 kg of cashew nuts. The salary received by the workers is . Peeled cashews are dried to make stripping the epidermis easier. The cashews are prepared as raw cashew nuts and fried cashews: raw cashew nuts for orders are dried before being packaged, while orders for fried cashew nuts go through a frying process. Raw cashew nuts are divided into 3 quality criteria based on customer orders: whole cashew nuts and split cashew nuts, which are packaged separately and sold at a price of , while crushed cashew nuts are sold at a price between . Mixed cashew nuts, consisting of whole, split, and crushed cashew nuts, are sold at prices ranging from to .

The cashew nut sheller is called a kacip. The kacip is a knife tool with a modified wooden frame, about 30 centimeters long; it consists of a wooden body and a pressing knife blade. The cashew shell is placed under the blade in the nut holder and held with the thumb and forefinger, while the other fingers hold the cashew nuts that have not yet been peeled. The nuts are shelled by placing them one by one. Shelling must be done carefully because the raw cashew contains a liquid sap that is very irritating. Since the selling price of whole cashews of good quality is higher, the shelling should not split the nut into two parts. At present, cashew nut shelling equipment is owned by only a few workers, because not everyone has the expertise to peel cashew nuts with a good-quality result.
3 Morphological Chart Analysis Method

The method of designing the cashew nut sheller is to map the important parts of the sheller based on the reference of the existing tool. The morphological map analysis is carried out by determining the technical and mechanism criteria, the material criteria, the strength criteria, and the quality criteria. These criteria make it easy to set and sort the appropriate components. The criteria are applied to the whole tool, divided into its parts: the buffer construction, the lever, the knife, the tool holder, and the cashew nut holder. Alternatives are then given for every part of the tool and combined into alternative tool concepts, each of which is described thoroughly. All alternatives are then analyzed against the criteria of technical specifications and mechanisms, material strength, and quality, determined from the specifications needed for the tool.

4 Technical Criteria and Mechanisms

The technical analysis is a functional and technical analysis related to the mechanism of the peeler system. In addition, it covers the analysis of the supporting structure of the peeler [4].

4.1 Technical specifications

The existing cashew sheller is called a kacip. The tool is made of one blade with a slit in the side of the knife. The cashew nuts are laid one by one and held with the fingers, without any safety. Incorrect pressing of the knife causes the skin not to be peeled, or the cashew to split into two; therefore not everyone can peel cashews using the kacip. The design of the cashew nut sheller uses the kacip as its reference. The frame, previously made of wood, is replaced with metal.
The blade is replaced with stainless steel, chosen because the material is safe for food and food ingredients, although its price is higher than that of iron. The knife is designed as a flat plate with a pointed triangular tip with an angle of 15–20 degrees. The types of knife edge can be seen in Figure 5.

Figure 5. Types of knife edge

The lever system and the press on the blade use hinges retained with a spring. The place for laying the cashew nuts uses a clamp system as a position guard: the cashews are still placed one by one, using a clamp that can be moved towards the blade. Placement in large poured volumes is difficult because the shapes of the cashew nuts are not uniform and the sizes differ. The position of the cashew nut to be peeled must be on the back of the nut. The lever, directed perpendicular to the stabbing direction, is intended to lever the cashew seed out of its shell. The lever knife uses a maximum torque of 20 so that a minimum angle of 15 degrees is formed. Mohamad Saldin Wibowo (IPB, 2011) examined the sizes of local Indonesian cashews and obtained three length classes of cashew nuts; these sizes determine the size of the blade to be used on the cashew nut peeler [5]. From the cashew size data, blade lengths were determined for piercing large, medium, and small cashew nuts; from these three knife sizes, the average size is taken so that one blade size can accommodate the piercing function for all three size criteria.

4.2.
4.2 Morphological Chart

The morphological map method requires comparison with existing tools, judging their weaknesses and strengths so that they can serve as references for new designs and innovations [6]; innovation here includes modification. The comparison is given in Table 5.

Table 5. Analysis of existing tools

Tool's name: (a) kacip (wooden sheller); (b) kacip with flat plate; (c) kacip with ripper clamps.

Operation: (a) the blade of the kacip is shaped to match the natural form of the cashew, so that the blade splits only the shell, a few millimetres thick; after the shell is split, the kernel is removed with a knife or a flat nail; with this kacip, 8 kg of cashews are obtained per person per day (one 8-hour working day), of which about 70% are whole kernels and 30% are fractions consisting of halves, fragments, groats and dust; (b) and (c) the cashew is placed in the clamp gap and then pressed with a hand-held motion.

Deficiency: (a) cashews are placed one by one and held by hand, the chance of splitting the kernel is very high, trained operators are needed, and the kernel must still be gouged out of the shell with another tool; (b) cashews are still placed one by one on a jagged base, which is still not safe; (c) cashews are still placed one by one, the pressure must be applied carefully, and the kernels may still be crushed or split.

From this analysis of the working methods, or functional analysis, of the tools observed above, the shortcomings of the tools point to the best solution for a modification that will be more optimal; the reference standard for optimality is fewer split cashew kernels. The functional analysis shows a common working principle: the first step is to stab the cashew nut and then pick the kernel out.
The design of the cashew nut peeler consists of the following sections:

a. Supporting frame. The supporting frame supports the entire cashew shelling unit. It also resists the forces that arise from the transmission of force and from the weight of the load.

b. Lever handle. The lever is an arm that channels the pressing force and then levers the blade to stab and gouge the cashew.

c. Spring. The spring damps vibrations caused by the movement of the lever when stabbing and gouging the cashew shell. After the lever is moved, the handle returns to its original position by means of the spring. The spring can also be used to regulate and control the compressive force, so that the pressing travel of the blade can be adjusted to the posture of the cashew being peeled; it likewise reduces the force the operator must apply.

d. Knife. The knife pierces the shell of the cashew and is then raised to release the kernel from the shell. The edge of the paring knife has a 15° tip and a smooth angle so that it can pierce the cashew shell. A pointed tip suits cashews of different sizes, because it can pierce according to their posture and accommodates the curved shape of the cashew. This lets the blade pierce the shell at the desired position and depth, so that the kernel remains intact.

4.3 Ergonomic Analysis

Working comfort, ease of operating the tool, and safety while operating it are important criteria. Working comfort is determined by the compatibility between the working position and the size of the tool.
Ease of operation concerns the mechanism of the tool and its ease of use: it should not be complicated, should be easy to learn, and should require as little energy as possible. Work safety is viewed as protection against accidents caused by device error, operator accidents, and equipment damage.

Ergonomics, as one of the design criteria for the peeler, is determined by the comfort of the operator working while sitting, the reach of the hands when reaching, holding and rotating, and safety against the danger of cuts [7]. The anthropometric data were taken from female subjects aged 20 to 47 years, the female operators working for the Ngudi Koyo UKM. The measurements yielded the anthropometric data shown in Table 6 and Figure 6.

Table 6. Anthropometric data of female workers [5]
No. Body dimension
1. Eye height
2. Shoulder height
3. Eye height in sitting position
4. Shoulder height in sitting position
5. Elbow height in sitting position
6. Thigh thickness
7. Distance from buttocks to knees
8. Distance from knee fold to buttocks
9. Knee height
10. Knee height
11. Pelvic width
12. Distance from elbow to fingertip
13. Forward hand grip distance

Figure 6. Normal arm reach and farthest arm reach

Using the body sizes from the data above, the sheller's lever handle is dimensioned as follows.

a. Handle height. Handle height = seat height + elbow height in sitting position + allowance, measured from the floor to the maximum height of the handle on the tool; the dimension used is the elbow height in sitting position.
The size obtained from the calculation is the normal elbow height in the sitting position plus the seat height; the maximum angle for the normal arm position is taken into account. The calculation uses an average percentile value so that all operators can adjust properly.

b. Shortest handle distance from the operator. The dimension used is the distance from elbow to fingertip. The shortest distance of the handle from the operator must not be smaller than the length of the forearm; to accommodate all operators, a percentile value is used as the minimum size.

c. Farthest handle distance from the operator. The dimensions used are the shoulder height in sitting position (515 mm, average value), the elbow height in sitting position, the distance from elbow to fingertip, and the female finger length. For the maximum reach, the 5th percentile is used, so that operators with the smaller extreme sizes can still reach the handle. The upper arm makes an angle with the body that gives optimal pull [8].

d. Handle diameter. According to Petrofsky (1980, cited in Sritomo W., 2000), the optimal handle diameter for women has a recommended maximum value.

5 Sketch Design for Modifying the Cashew Peeler

The morphological map method is used to find alternative concepts for the design of the cashew tool. Selecting among alternatives means choosing the alternative concept that comes closest to fulfilling the required tool specifications. Parts and components of the tool are mapped based on the concepts of tools already on the market. The selection also considers the criteria established as references for choosing among alternatives.
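The percentile-based sizing rules of Section 4.3 can be sketched in a few lines of Python. This is only an illustration of the arithmetic: the seat height, allowance, and the sample of sitting elbow heights below are hypothetical stand-ins, since the paper's measured values for the Ngudi Koyo workers are not reproduced here.

```python
import statistics

def handle_height(seat_height_mm, sitting_elbow_height_mm, allowance_mm):
    # Handle height measured from the floor: seat height plus the sitting
    # elbow height plus an allowance, following the paper's formula.
    return seat_height_mm + sitting_elbow_height_mm + allowance_mm

# Hypothetical sample of sitting elbow heights (mm), for illustration only.
elbow_mm = [210, 225, 230, 240, 250, 255, 260, 270]

# The 50th percentile (median) sizes a "neutral" dimension so the average
# operator fits; a low percentile (e.g. 5th) would size reach limits so
# that operators with smaller extreme dimensions can still reach.
elbow_p50 = statistics.median(elbow_mm)

h = handle_height(seat_height_mm=450, sitting_elbow_height_mm=elbow_p50,
                  allowance_mm=20)
print(h)  # 715.0
```

The same pattern applies to the other dimensions: pick the body dimension, pick the percentile appropriate to the accommodation goal, then add any fixed offsets such as the seat height.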
The tool specifications are prepared from user needs and identified from the problems in working with the existing tool; see Table 7.

Table 7. Alternative analysis, morphological chart
No. Specification — Category — Aspect — Code
1. Cashew kernels separate from the shells — D — function — A1
2. Cashew kernels are not split — D — function — A2
3. The safety of the fingers when peeling is very important — W — safety — B
4. The force needed to peel is as light as possible — W — ease of operation — C1
5. The sheller accounts for a comfortable working position (ergonomics) — D — ergonomics — D1
6. The weight of the sheller is no more than 16 kg, the limit of women's lifting ability — W — ergonomics — D2
7. The tool is portable, so it can be used anywhere — W — ease of operation — C2
8. The tool is easy to maintain, durable and strong — W — ease of maintenance — E

The latest design uses the reference tool as its basis; it is laid out as a morphological map to make it easier to analyze the components of the design. Morphological map analysis can be used to introduce changes and modifications that remedy the shortcomings of the reference tools. A modification can be a total or a partial change. The main purpose of modification is to meet the needs of the users, both in function and in the aspects of the criteria arranged as requirements or specifications. Modification is an innovation applied as an alternative design concept, shown in Table 8.

Table 8. Analysis of alternative concepts, morphological chart: for each numbered component, the alternative concepts from the reference are labelled (A), (B) and (C).
1. Lever: (A) rear spring system, with the lever pressing at the side; operational lever angle within elbow range, maximum hand reach from the front; (B) operational lever angle within elbow range, maximum hand reach from the front; (C) operational lever angle within elbow range, hand reach from the front.

2. Pressing lever: (A) the press lever creates a thrust force in the cylinder, pressing the cylinder horizontally; (B) the press lever creates a compressive force on the blade, pressing vertically; (C) the press lever creates a compressive force on the blade, pressing vertically.

3. Blade position and cashew clamp holder: (A) the lever system uses a press hinge and rotates, with the arm behind a spring system at 1/3 of the front arm, the lever pressing horizontally; the clamp is in front and at the back, acting as a knife; laying the cashew is difficult because the position is above, without restraint; (B) the lever presses vertically; the blade is at the top and the bottom becomes the clamp; laying is easy but not safe, since the hands can be hit by the knife; (C) the lever presses vertically; the clamp is at the bottom and the knife at the top; laying is easy, but the hands can be hit by the knife.

4. Blade shape: the three options differ in blade shape (shown only as sketches in the original chart).

5. Mechanism: (A) the lever is pressed down, pushing the blade, and is then rotated clockwise; (B) the lever is pressed down, with the hand position at more than 90°; (C) the knife is pressed down using the right-hand lever, and the arm position can be kept at less than 90°.

6. Material: (A) metal tube for the peeling lever and press lever; (B) bent plate; (C) a combination of metal plate and metal tube.
7. Joint structure: (A) the joint uses two hinges in the motion joint: one press-motion hinge and one rotary-motion hinge for 90°; (B) the joint uses a pin and spring; it vibrates and returns to its original position because of the spring; (C) the joint uses a pin and spring, returns to its original position because of the spring, and then rotates clockwise using hinges. Regarding the energy needed to push the blade toward the cashew shell: with a front spring and the press lever positioned above, the hand's reach is difficult, and although the pressure is stronger, the work is more tiring; with a spring at the back of the lever, before the lever's pin axle, the compressive force is strong enough, less energy is expended, and the hand angle stays in the normal position.

Arranging alternative concepts: alternative designs are assembled as combinations of the components in the morphological map (see Table 9). Combinations are arranged from the best possibilities, so that the tool still functions properly and meets the criteria and specification requirements as far as possible [6].

Table 9. Morphology chart. Components (each with options A, B and C): 1. lever; 2. pressing lever; 3. blade position and cashew clamp holder; 4. blade shape; 5. mechanism; 6. material; 7. joint structure; 8. direction of the operator towards the tool. The new design concepts are: alternative 1 = 1A, 2B, 3C, 4C, 5C, 6A, 7C, 8B; alternative 2 = 1B, 2C, 3C, 4A, 5B, 6B, 7C, 8B; alternative 3 = 1C, 2B, 3A, 4C, 5B, 6C, 7B, 8C.
8. Direction of the operator towards the tool: (A) the operator is in front, at the maximum straight reach, with an elbow angle of more than 90°; (B) the operator is in front, at the maximum straight reach, with an elbow angle of more than 90°; (C) the operator is in front with the elbow in the normal position.

6 Results and Discussions of Design Concepts

The results of this study are summarized in Table 10.

Table 10. Analysis of alternative design sketches

1. New design concept alternative 1 (1A, 2B, 3C, 4C, 5C, 6A, 7C, 8B): this concept uses two springs. The working system presses down with a spring, then rotates using a lever that presses the spring; the lever is in front of the operator. Specifications met: A1, A2, C1, D2, C2 — 5 of the 8 specification requirements.

2. New design concept alternative 2 (1B, 2C, 3C, 4A, 5B, 6B, 7C, 8B): this concept uses one spring. The working system presses down with a spring, then rotates using another lever on the blade body; the lever is in front of the operator. Specifications met: A1, A2, C1, D2, C2, E — 6 of 8.

3. New design concept alternative 3 (1C, 2B, 3A, 4C, 5B, 6C, 7B, 8C): this concept uses one spring. The working system presses down with a spring, then rotates using the press lever; the lever is at the operator's farthest reach from the front, but allows the operator to work from the right side of the tool. Specifications met: A1, A2, B, C1, D2, C2, E — 7 of 8.

7 Conclusions

The alternative design-concept sketches are weighted and judged by how well they fulfil the criteria or specification requirements; a greater value indicates the preferred alternative design concept.
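The weighting step above can be sketched in Python. The component combinations and the sets of satisfied specification codes are taken from Tables 7, 9 and 10; the scoring helper itself is our illustration of the selection rule, not the authors' software.

```python
# Eight specification codes from Table 7.
SPEC_CODES = {"A1", "A2", "B", "C1", "C2", "D1", "D2", "E"}

# Each concept is a combination of component options (Table 9) together
# with the specification codes it satisfies (Table 10).
concepts = {
    "alternative 1": {"options": ["1A", "2B", "3C", "4C", "5C", "6A", "7C", "8B"],
                      "met": {"A1", "A2", "C1", "D2", "C2"}},
    "alternative 2": {"options": ["1B", "2C", "3C", "4A", "5B", "6B", "7C", "8B"],
                      "met": {"A1", "A2", "C1", "D2", "C2", "E"}},
    "alternative 3": {"options": ["1C", "2B", "3A", "4C", "5B", "6C", "7B", "8C"],
                      "met": {"A1", "A2", "B", "C1", "D2", "C2", "E"}},
}

def best_concept(concepts):
    # Choose the combination that meets the most specification requirements.
    return max(concepts, key=lambda name: len(concepts[name]["met"] & SPEC_CODES))

print(best_concept(concepts))  # alternative 3, meeting 7 of the 8 criteria
```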
Refinements must still be made to the chosen design concept. In the selected design, the most comfortable position is in front of the operator, but the normal hand reach should not exceed an angle of 90°. The design was therefore changed by turning the handle to the side, so that the handle can be reached in the normal position, as shown in Figure 7.

Figure 7. The chosen peeler concept, with a change in handle direction

The cashew nut peeler is operated manually. The tool is sized using the anthropometric measurements of the female operators. Working comfort is improved by working in a sitting position, with the tool placed on the work table. With the operator facing the tool from the front, the hands are at least better protected from the danger of being cut by the knife. The lever is located at the normal elbow position, so that the reach of the press lever meets ergonomic comfort standards. The tool is operated by moving the press lever with the right hand, then moving the press lever clockwise to help pry the cashew kernel out of the shell. The tool is protected with a cover that ensures safety and eases cleaning and maintenance. The weight of the tool is kept down by choosing the right material, so that the tool is lighter. An opportunity for the next design improvement is the flexibility of the tool, so that it can be adjusted to the work table and to a comfortable seated position for all operators. With this design concept, the tool is expected to be easy to use and safe, so that the female workers at the Ngudi Koyo UKM can work more effectively and efficiently.
References

[1] Badan Pusat Statistik, Kabupaten Bantul dalam Angka, Badan Pusat Statistik Kabupaten Bantul, 2018.
[2] D. Awaludin, Modifikasi dan Uji Performansi Alat Pengupas Kulit Buah Mete, Fakultas Teknologi Pertanian, Institut Pertanian Bogor, Bogor, 1995.
[3] Direktorat Jendral Perkebunan, Pedoman Pelaksanaan Pengembangan Jambu Mete, Departemen Pertanian, Jakarta, 1979.
[4] N. Cross, Engineering Design Methods, John Wiley & Sons, Chichester, 2008.
[5] M. S. Wibowo, Modifikasi dan Uji Performansi Alat Pengupas Kulit Buah Mete Gelondong, Departemen Teknik Mesin dan Biosistem, Fakultas Teknologi Pertanian, Institut Pertanian Bogor, Bogor, 2011.
[6] E. Lutters, F. J. A. M. van Houten, A. Bernard, E. Mermoz, and C. S. L. Schutte, "Tools and techniques for product design," CIRP Annals, 63 (2), 607–630, 2014.
[7] S. Ramadhan and Haniza, "Ergonomic facility design on station CV Putra Darma sorting," Journal of Industrial and Manufacturing Engineering, 1 (1), 46–55, 2017.
[8] S. Wigjosoebroto and Sutaji, "Analisa dan redesain stasiun kerja operasi tenun secara ergonomi untuk meningkatkan produktivitas," Seminar Nasional Ergonomi, Teknik Industri FTI-ITS, Surabaya, 2000.
International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 2, pages 137–156, p-ISSN 2655-8564, e-ISSN 2685-9432

Binary Logistic Regression Modeling on Net Income of Pagar Alam Coffee Farmers

Ngudiantoro¹, Irmeilyana¹,*, Mukhlizar Nirwan Samsuri¹

¹Department of Mathematics, Faculty of Mathematics and Natural Sciences, Sriwijaya University, Indralaya, Sumatera Selatan, Indonesia
*Corresponding author: irmeilyana@unsri.ac.id

(Received 10-07-2020; revised 18-08-2020; accepted 18-08-2020)

Abstract

Pagar Alam coffee is a Besemah coffee originating from smallholder plantations in South Sumatra, Indonesia. The majority of Pagar Alam coffee farms are hereditary businesses. Coffee farmers' income depends heavily on coffee production, production costs, and coffee prices. This study aims to obtain a probability model of Pagar Alam coffee farmers' income based on the factors that influence it. The independent variables studied were the number of dependents, economic conditions, number of trees, age of trees, frequency of fertilizer use, frequency of pesticide use, production at harvest time, production outside harvest time, number of women workers from outside the family, minimum price of coffee, maximum price of coffee, farmers' gross income, and land productivity. Binary logistic regression was applied to 179 respondents, using three methods: the enter method, and the forward and backward methods. The model obtained with the enter method gives the greatest prediction accuracy, 87.7%.
The factors that significantly influence the net income of Pagar Alam coffee farmers are gross income, land productivity, and the number of women workers from outside the family; the most influential variable is gross income.

Keywords: net income, gross income, Pagar Alam coffee farmers, binary logistic regression

1 Introduction

Coffee is one of the mainstay export commodities of Indonesian plantations. Based on export value in 2017, Indonesia is among the 10 largest coffee-exporting countries in the world. This can be seen from the selection of leading export commodities (winning commodities) using several analytical methods, namely computable general equilibrium (CGE) and export product dynamics (EPD) (data source: Indonesia Eximbank Institute and UNIED, 2019, in [1]). Indonesia is the fourth largest coffee producer in the world, with an output of 6.84% of world coffee production. The provinces contributing most to Indonesia's coffee production are South Sumatra, Lampung, North Sumatra, Bengkulu, and Aceh.

In the WTO series of webinars on trade policy analysis on June 8-14, 2020, one of the speakers, Dedi Budiman Hakim, said that in 2017 the number of smallholder estates was estimated to have decreased by 0.07%, while state-owned and private estates grew by 0.07% and 0.18% respectively. The volume of coffee produced by smallholder plantations in 2017 was predicted to reach 599,902 tons, with production growth falling by 0.37% compared to 2016, while the production volumes of state and private plantations increased by 0.42% and 2.36% respectively. One factor behind the decline in Indonesian coffee production was weather that did not support production activities.
In addition, some farmers' lack of knowledge about coffee cultivation and the high price of fertilizers have kept Indonesia's coffee production below its maximum level. The world coffee price is projected to increase owing to growing demand from retail coffee shops, which are enjoying a positive trend driven by a change in coffee-drinking culture. In 2019, Indonesian coffee exports to the United States and several other major destination countries are projected to grow positively again, in line with the increase in domestic production. Projections can be higher than expected if the productivity of Indonesian coffee plantations rises through the application of better agricultural technology; they can be lower than expected if bad weather reduces coffee yields, or if domestic coffee consumption increases so that the amount exported decreases (source: trademap.org, Oxford Economics, LPEI, in [1]).

Coffee plantations in Pagar Alam, South Sumatra province (Sum-Sel), are smallholder plantations that have existed since the Dutch colonial era. The majority of the coffee planted is robusta; a portion of land at altitudes above 1,000 masl produces arabica. According to [2], Sum-Sel forms a cluster characterized by the largest land area, the highest coffee production, the largest area of TM-PR (producing plants on smallholder plantations), the largest robusta area, and the highest robusta coffee production. The area of TBM (immature plants), the area of TR (damaged plants), and the number of farmers in Sum-Sel are also high. Based on [3], Kota Pagar Alam, Banyuasin Regency, and Lubuk Linggau form a cluster with no dominant characteristics.
All the variable values studied for that cluster are low; these variables include area, area of TM, area of TBM, area of TR, production, average production, and number of farmers. Pagar Alam coffee has its own distinctive taste and is well known as Besemah coffee; Besemah coffee also includes coffee produced in Lahat Regency and Empat Lawang Regency [4], and these three regencies border each other.

There are 31 variables (including 2 nominal-scale variables) analyzed in [5]. The correlation of each of 29 variables with the net income variable is weak or very weak, ranging from -0.135 to 0.455; only the gross income variable is strongly correlated with net income, at 0.709. Six variables are very weakly negatively correlated, among them the age of the tree, the number of women workers in the family, the number of male workers in the family, the length of harvest, and the maximum price of coffee.

Based on the bivariate analysis with the chi-square test in [6], only 13 factors were related to the farmers' net income, namely: the number of dependents, number of trees, age of trees, number of female workers from outside the family, frequency of fertilizer use, frequency of herbicide use, harvest production, outside-harvest production, gross income, minimum price of coffee beans, maximum price of coffee beans, economic condition, and land productivity. Based on the correspondence analysis in [7], the factors related to land productivity are the area, number of trees, planting area of one tree, frequency of fertilization, frequency of herbicide use, harvest period, and harvest production.
The purpose of this paper is to form a probability model of Pagar Alam coffee farmers' net income based on the factors that are significantly related to net income. The data used are based on [6]. The method used to determine the net income probability model is logistic regression. The estimation model in this paper can serve as a recommendation for relevant agencies that need information about the factors that simultaneously influence the net income of coffee farmers in Pagar Alam. It can also be input for policy makers and related institutions seeking to improve the welfare of coffee farmers by optimizing the production factors that affect farmers' incomes.

Logistic regression is a regression analysis used to describe the relationship between a dependent variable that is dichotomous (nominal or ordinal scale with 2 categories) or polychotomous (nominal or ordinal scale with more than 2 categories) and a set of predictor variables that are continuous or categorical [8]. If the dependent variable consists of two categories (binary), the logistic regression is called binary logistic regression [9, 10]. One application of binary logistic regression models was given in [11]: the model was used to analyze the characteristics of songket craftsmen in Ogan Ilir Regency, so that the potential of the craftsmen and the factors that directly influence their productivity could be identified; the factors that greatly influence productivity can then be recommended to policy makers for improving the welfare of the craftsmen. The same applies to the probability model of the factors that affect the income of coffee farmers.

2 Research Methodology

The subjects of this study were farmers in Pagar Alam, South Sumatra province, who run coffee farms.
Respondents were chosen through a purposive sampling technique. The data in this paper are the results of the research in [6]. The variables used are those significantly related to the net income variable based on the results of the correspondence analysis. The method used is binary logistic regression analysis. The dependent variable is the net income of coffee farmers, divided into two categories, namely 0: low net income and 1: high net income. There are 13 independent variables, namely economic condition, number of dependents, number of coffee trees, age of trees, frequency of fertilizer use in one year, frequency of pesticide use in one year, production at harvest time (in quintals), production outside harvest time (in quintals), number of women workers from outside the family, the minimum price of coffee (in Rp/kg), the maximum price of coffee (in Rp/kg), gross income (in Rp), and land productivity (in 10⁻⁴ kg/m²).

The binary logistic modeling is applied as follows:
1. conduct a descriptive analysis to determine the characteristics of the independent variables (see [5]);
2. conduct a bivariate analysis to see the relationship between the independent variables and the dependent variable (done in [6]);
3. estimate the model parameters using the maximum likelihood method;
4. test the parameters simultaneously and partially;
5. choose the best model using the enter method, backward stepwise elimination, and forward stepwise elimination;
6. interpret the model and the results obtained;
7. draw conclusions from the results of the research.

Modeling is done with the help of Minitab 18 and SPSS 24 software.
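For reference, the model estimated in steps 3 and 4 is the standard binary logistic regression of [8-10]: with predictors $x_1, \dots, x_{13}$ as defined above, the probability that a farmer has high net income is

```latex
\pi(\mathbf{x}) \;=\; P(y = 1 \mid \mathbf{x})
  \;=\; \frac{\exp\!\left(\beta_0 + \beta_1 x_1 + \cdots + \beta_{13} x_{13}\right)}
             {1 + \exp\!\left(\beta_0 + \beta_1 x_1 + \cdots + \beta_{13} x_{13}\right)},
\qquad
g(\mathbf{x}) \;=\; \ln\frac{\pi(\mathbf{x})}{1 - \pi(\mathbf{x})}
  \;=\; \beta_0 + \sum_{i=1}^{13} \beta_i x_i ,
```

and the coefficients $\beta_i$ are chosen to maximize the likelihood $L(\beta) = \prod_{j} \pi(\mathbf{x}_j)^{y_j}\,\bigl(1-\pi(\mathbf{x}_j)\bigr)^{1-y_j}$ over the 179 respondents (step 3); the omnibus and Wald tests of step 4 then assess the $\beta_i$ simultaneously and individually.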
3 Results and Discussion

The variables used in this paper are those significantly related to the net income variable of Pagar Alam coffee farmers, based on the results of correspondence analysis [6]. The data were processed by binary logistic regression modeling. The dependent variable is the net income of coffee farmers, divided into two categories: 0 for low net income and 1 for high net income. There are 13 independent variables, all divided into categories and converted to an ordinal scale; the division of each independent variable into categories is based on the cut-point method. These categories of the independent variables and their descriptions can be seen in [2]. Table 1 shows the variable notation, the number of categories of each variable, and the category of each variable with the highest percentage of respondents.

Table 1. Variable notation, number of categories, and highest category
No. Variable — Notation — Number of categories — Highest category (in %)
1. Net income — y — 2, i.e. 0: low and 1: high — 1: high (63)
2. Economic conditions — x1 — 3 — 1: not enough (48); 2: enough (51)
3. Number of dependents — x2 — 5 — 3: 2 persons (36)
4. Number of trees — x3 — 5 — 3: (2,500; 4,000] (44)
5. Age of trees — x4 — 6 — 2: [10, 20] (45)
6. Frequency of fertilizer used — x5 — 6 — 2: 1 (46)
7. Frequency of pesticides used — x6 — 4 — 2: 1 (36)
8. Production at harvest period — x7 — 5 — 1: <1,000 (41); 2: [1,000; 2,000) (39)
9. Production outside the harvest period — x8 — 6 — 3: (50, 250] (41)
10. Number of women workers outside the family (TKWL) — x9 — 5 — 1: 0 persons (79)
11. Minimum price — x10 — 1: < 19,333 (78)
12. Maximum price — x11 — 3 — 1: < 20,667 (63)
13. Gross income — x12 — 4 — 2: (10, 25] (49)
14. Land productivity — x13 — 4 — 2: [1,333; 2,667) (27); 3: [2,667; 5,000) (30)

Note: let xi(j) denote variable xi in category j; for example, x2(3) denotes variable x2 (number of dependents) in category 3 (2 persons).
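The maximum likelihood estimation of step 3 can be illustrated with a minimal, self-contained sketch. The authors used SPSS and Minitab, so the plain gradient-ascent fit below and its tiny invented dataset (a single ordinal predictor standing in for a gross-income category) are our illustration, not the paper's data or software.

```python
import math

# Invented (x, y) pairs: x is an ordinal predictor category,
# y = 1 means high net income. Not the paper's farmer data.
data = [(1, 0), (2, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 1), (4, 1)]

def fit_logistic(data, lr=0.05, iters=20000):
    """Maximize the log-likelihood sum(y*log(p) + (1-y)*log(1-p)),
    where p = 1 / (1 + exp(-(b0 + b1*x))), by gradient ascent."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient of log-likelihood w.r.t. b0
            g1 += (y - p) * x    # gradient of log-likelihood w.r.t. b1
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

b0, b1 = fit_logistic(data)
predict = lambda x: 1.0 / (1.0 + math.exp(-(b0 + b1 * x))) >= 0.5
accuracy = sum(predict(x) == bool(y) for x, y in data) / len(data)
print(f"b1 = {b1:.2f}, accuracy = {accuracy:.3f}")
```

The fitted slope is positive (a higher predictor category raises the odds of high net income), and the share of correctly classified cases corresponds to the overall percentage reported in SPSS classification tables.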
By defining the variables as in Table 1, the data matrix of 179 respondents was processed using SPSS version 24. Some of the outputs of the enter method are as follows.

Table 2. Case processing summary: 179 unweighted cases were selected and included in the analysis (100.0%); there were no missing cases and no unselected cases.

Based on Table 2, 179 respondents were analyzed and no data were missing.

Table 3. Dependent variable encoding: original value 0 is coded internally as 0, and original value 1 as 1.

Based on Table 3, the dependent variable code is 0 for low net income and 1 for high net income.

Table 4. Classification table in the beginning block (step 0, constant included, cut value 0.500): of the 66 farmers observed with net income 0, none were predicted as 0 (0.0% correct); all 113 farmers with net income 1 were predicted as 1 (100.0% correct); overall percentage 63.1%.

Based on Table 4, in step 0 the logistic regression model has only a constant; there are no independent variables in the equation, and the prediction accuracy is only 63.1%. The constant alone is significant in influencing the net income of coffee farmers.

Table 5. Variables in the equation and not in the equation. In the equation at step 0: constant B = 0.538, S.E. = 0.155, Wald = 12.048, df = 1, sig. = 0.001, Exp(B) = 1.712. Not in the equation at step 0 (score, df, sig.): economic conditions: 5.699, 1, 0.017; number of trees: 2.018, 1, 0.155; age of trees: 0.001, 1, 0.978; freq. of fertilizer: 4.376, 1, 0.036; freq. of pesticides: 1.034, 1, 0.309; production at harvest: 27.913, 1, 0.000; production
outside harvest 7.694 1 .006 tkwl 2.970 1 .085 minimum price 3.577 1 .059 maximum price .670 1 .413 gross income 87.195 1 .000 land productivity 15.634 1 .000 number of independents 1.729 1 .189 overall statistics 94.330 13 .000 based on table 5, with the wald test at we reject h0, so there are independent variables that affect farmers' net income. table 6. block 1 in enter method omnibus tests of model coefficients chi-square df sig. step 1 step 133.668 13 .000 block 133.668 13 .000 model 133.668 13 .000 model summary step -2 log likelihood cox & snell r square nagelkerke r square 1 101.992 a .526 .719 a. estimation terminated at iteration number 7 because parameter estimates changed by less than .001. the omnibus test in table 6 presents a simultaneous test of all variable coefficients in the logistic regression model. value is a distinction of -2 log likelihood model with only constants and estimated models. significant value represents that the independent variables affect net income. the cox & snell r square value is 0.526, which means that the independent variables in the model can explain the high or low net income of a farmer by 52.6%. international journal of applied sciences and smart technologies volume 2, issue 2, pages 137–156 p-issn 2655-8564, e-issn 2685-9432 145 similarly, nagelkerke's value of 0.719 states that the independent variables in the model are able to explain the high or low net income of a farmer at 71.9%. table 7. hosmer and lemeshow test hosmer and lemeshow test step chi-square df sig. 1 7.814 8 .452 contingency table for hosmer and lemeshow test net income = 0 net income = 1 total observed expected observed expected step 1 1 17 17.814 1 .186 18 2 18 17.373 0 .627 18 3 14 13.344 4 4.656 18 4 7 6.760 11 11.240 18 5 7 5.238 11 12.762 18 6 1 3.499 17 14.501 18 7 2 1.666 16 16.334 18 8 0 .253 18 17.747 18 9 0 .039 18 17.961 18 10 0 .013 17 16.987 17 the hosmer and lemeshow test in table 7 is based on the chi-square distribution test. 
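several of the summary numbers above can be reproduced directly: the step-0 constant is the log odds of the sample proportion (113 high vs 66 low), and both pseudo r-squares follow from the reported -2 log likelihood and omnibus chi-square using the standard cox & snell and nagelkerke formulas (see [9]). a quick pure-python check, with our own variable names:

```python
import math

n_high, n_low = 113, 66                 # net income counts from table 4
n = n_high + n_low                      # 179 respondents

# step 0: constant-only model reproduces table 5's constant row
b0 = math.log(n_high / n_low)           # intercept = log odds of the proportion
odds = math.exp(b0)                     # the exp(b) column
baseline_accuracy = 100 * n_high / n    # step-0 overall percentage correct

# block 1: pseudo r-squares from the -2 log likelihood values in table 6
neg2ll_model = 101.992
chi_sq = 133.668                        # omnibus chi-square
neg2ll_null = neg2ll_model + chi_sq     # -2LL of the constant-only model

cox_snell = 1 - math.exp(-chi_sq / n)
max_cox_snell = 1 - math.exp(-neg2ll_null / n)  # ceiling for a binary outcome
nagelkerke = cox_snell / max_cox_snell

print(round(b0, 3), round(odds, 3), round(baseline_accuracy, 1))  # 0.538 1.712 63.1
print(round(cox_snell, 3), round(nagelkerke, 3))                  # 0.526 0.719
```

the printed values match tables 4, 5 and 6 exactly, which is a useful sanity check on the transcribed output.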
the chi-square value is 7.814. with α = 0.05 and df = 8, the table value χ²(0.05; 8) is 15.507, so 7.814 < 15.507. it means there is no difference between the observations and the model. the chi-square test is not significant (sig. = .452 > 0.05), so the predicted probabilities correspond to the observed probabilities. in this case, the model formed can be said to be appropriate. the contingency table for the hosmer and lemeshow test provides the information that the data is divided into 10 groups. across the groups, the number of farmers with high net income rises. for example, in the first group, out of 18 cases, there were 17 low-net-income farmers and 1 high-net-income farmer. in the fifth group, out of 18 cases, there were 7 low-net-income farmers and 11 high-net-income farmers.

table 8. classification table on step 1
observed | predicted: net income = 0 | predicted: net income = 1 | percentage correct
step 1, net income = 0 | 48 | 18 | 72.7
step 1, net income = 1 | 4 | 109 | 96.5
overall percentage | | | 87.7
a. the cut value is .500
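the hosmer and lemeshow chi-square is recoverable from its contingency table: sum (observed - expected)² / expected over both outcome columns of the 10 groups. a sketch (small rounding differences against the reported 7.814 are expected, since the table prints expected counts to three decimals):

```python
# (observed, expected) pairs per decile group from table 7,
# first for net income = 0, then for net income = 1
groups = [
    ((17, 17.814), (1, 0.186)),
    ((18, 17.373), (0, 0.627)),
    ((14, 13.344), (4, 4.656)),
    ((7, 6.760), (11, 11.240)),
    ((7, 5.238), (11, 12.762)),
    ((1, 3.499), (17, 14.501)),
    ((2, 1.666), (16, 16.334)),
    ((0, 0.253), (18, 17.747)),
    ((0, 0.039), (18, 17.961)),
    ((0, 0.013), (17, 16.987)),
]

hl = sum((o - e) ** 2 / e for pair in groups for o, e in pair)
print(round(hl, 2))  # close to the reported 7.814 (df = 8, sig. = .452)
```

with 10 groups the statistic is referred to a chi-square distribution with 10 - 2 = 8 degrees of freedom, which is the df shown in table 7.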
of pesticides .016 .281 .003 1 .955 1.016 production at harvest -.512 .432 1.401 1 .236 .600 prod. outside harvest .403 .270 2.229 1 .135 1.497 tkwl .779 .394 3.919 1 .048 2.180 minimum price .257 .730 .124 1 .724 1.294 maximum price -.259 .642 .163 1 .686 .772 gross income 4.663 .895 27.179 1 .000 105.998 land productivity .914 .400 5.214 1 .022 2.495 number of independents -.095 .232 .170 1 .680 .909 constant 11.799 2.604 20.536 1 .000 .000 a. variable(s) entered on step 1: economic conditions, number of trees, age of trees, freq. of fertilizer freq. of pesticides, freq. of pesticides, production at harvest, prod. outside harvest, tkwl, minimum price, maximum price, gross income, land productivity, number of independents. table 9 shows the wald test which is a partial test of the significance of independent variables. wald statistical value follows the chi-square distribution, so that if we see from the sig value, then for , it is found that the independent variables of the number of women workers outside the family (tkwl), gross income, and land productivity significantly influence the significance of farmers' net income. the value of exp is the odds ratio, all of which are positive. exp represents that for an increase in the independent variable by 1 unit, the ratio of the international journal of applied sciences and smart technologies volume 2, issue 2, pages 137–156 p-issn 2655-8564, e-issn 2685-9432 147 possibility of high net income to low net income also increases, by assuming the other independent variables are fixed. in the economic condition variable whose ordinal scale data (i. e. 1, 2, and 3 consecutively states that the economic condition is not enough, sufficient, and more than enough), if the economic condition increases 1 level of category, then the likelihood ratio of high net income with low net income increases with a factor of 1.25, assuming the other independent variables are fixed. 
likewise, if the tkwl used rises by 1 category level (i.e. category 1 for 0 tkwl, 2 for 1 person tkwl, and so on), then the likelihood ratio of high-net-income farmers to low-net-income farmers rises by a factor of 2.18, assuming the other independent variables are fixed. if the farmer has 1 tkwl (notated by x9 = 2), then the farmer's net income is 2.18 times as likely to be high, compared to a farmer who does not employ tkwl (notated by x9 = 1). each increase of 1 category of tkwl multiplies the odds of high net income by 2.18. for exp(b) < 1 (i.e. with a negative b value), if the independent variable goes up 1 category level, then the ratio of the possibility of high net income to low net income decreases by that factor, assuming the other independent variables are fixed. examples of such independent variables are the age of the trees, the frequency of fertilizer used, production at harvest, the maximum price, and the number of dependents. every increase in the category of the age of trees makes a high net income of farmers less likely. based on the significant factors affecting the net income of pagar alam coffee farmers, the binary logistic regression model that is formed is

π(x) = e^{g(x)} / (1 + e^{g(x)}), with g(x) = -11.799 + 0.779 x9 + 4.663 x12 + 0.914 x13.

because the coefficients of these significant variables are positive, these variables can increase the probability value of the model. calculation of the probability of the model can be done for each of the available categories. for example, we can find the probability of the model with tkwl x9 = 1, gross income x12 = 2, and land productivity x13 = 3, so the probability is as follows:

g(x) = -11.799 + 0.779(1) + 4.663(2) + 0.914(3) = 1.048, π(x) = e^{1.048} / (1 + e^{1.048}).

then, we get π(x) = 0.7404.
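the worked probability can be checked numerically. the logit below uses the enter-method coefficients and constant from table 9, with category codes x9 = 1, x12 = 2, x13 = 3:

```python
import math

def probability(x9, x12, x13):
    """probability of high net income from the fitted logit
    g(x) = -11.799 + 0.779*x9 + 4.663*x12 + 0.914*x13 (table 9)."""
    g = -11.799 + 0.779 * x9 + 4.663 * x12 + 0.914 * x13
    return 1 / (1 + math.exp(-g))

pi = probability(x9=1, x12=2, x13=3)
print(round(pi, 4))  # 0.7404, i.e. the 74.04% stated in the text
```

the same function reproduces every other cell of table 10 by varying the three category codes.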
based on the value π(x) obtained, it can be seen that the probability of high net income for coffee farmers who do not employ tkwl, have a gross income of more than 10 up to 25 million rupiahs, and have a land productivity of [2,667; 5,000) is 74.04%. the results of calculating the probability of high net income from the combinations of the available categories of the three variables can be seen in table 10.

table 10. calculation results from the combination of the categories of three variables
each cell is the probability of high net income; the four values in each row correspond to land productivity (x13) categories 1: <1,333; 2: [1,333; 2,667); 3: [2,667; 5,000); 4: ≥5,000.

tkwl (x9) 1: 0 person
gross income (x12) 1: ≤10 | 0.0043, 0.0107, 0.0262, 0.0629
gross income 2: (10, 25] | 0.3143, 0.5334, 0.7404, 0.8767
gross income 3: (25, 50] | 0.9798, 0.9918, 0.9967, 0.9987
gross income 4: >50 | 0.9998, 0.9999, 1.0000, 1.0000

tkwl 2: 1 person
gross income 1: ≤10 | 0.0093, 0.0230, 0.0554, 0.1276
gross income 2: (10, 25] | 0.4998, 0.7136, 0.8614, 0.9394
gross income 3: (25, 50] | 0.9906, 0.9962, 0.9985, 0.9994
gross income 4: >50 | 0.9999, 1.0000, 1.0000, 1.0000

tkwl 3: 2 persons
gross income 1: ≤10 | 0.0201, 0.0488, 0.1133, 0.2418
gross income 2: (10, 25] | 0.6852, 0.8445, 0.9312, 0.9713
gross income 3: (25, 50] | 0.9957, 0.9983, 0.9993, 0.9997
gross income 4: >50 | 1.0000, 1.0000, 1.0000, 1.0000

tkwl 4: 3 persons
gross income 1: ≤10 | 0.0429, 0.1005, 0.2179, 0.4100
gross income 2: (10, 25] | 0.8259, 0.9221, 0.9672, 0.9866
gross income 3: (25, 50] | 0.9980, 0.9992, 0.9997, 0.9999
gross income 4: >50 | 1.0000, 1.0000, 1.0000, 1.0000

tkwl 5: ≥4 persons
gross income 1: ≤10 | 0.0889, 0.1958, 0.3778, 0.6023
gross income 2: (10, 25] | 0.9118, 0.9627, 0.9847, 0.9938
gross income 3: (25, 50] | 0.9991, 0.9996, 0.9999, 0.9999
gross income 4: >50 | 1.0000, 1.0000, 1.0000, 1.0000

based on table 10, the higher each of the independent variable categories is, the higher the probability value of the farmers' net income. the most influential variable is gross income, while the variable with the smallest influence on the model is tkwl. in each tkwl category, if the gross income category is 4 (notated by x12 = 4), then for land productivity in categories 1 to 4 (starting from x13 = 1), the value π(x) ≥ 0.9998. in addition, in each tkwl category, if the gross income category is 3 (notated by x12 = 3), then for land productivity in categories 1 to 4, the value π(x) ≥ 0.9798. likewise, if the gross income category is 2 (notated by x12 = 2), then for land productivity in category 4 (notated by x13 = 4), the value π(x) ≥ 0.8767. in this case, the increasing net income of coffee farmers can be represented by high gross income, a higher number of women workers outside the family, and high land productivity. in each tkwl category, if the gross income category is 1 (notated by x12 = 1), then regardless of the land productivity category, the probability value of high net income is very small. the model results when data processing uses the forward method are as follows in table 11. table 11.
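table 10 can be regenerated in full by sweeping the category codes through the fitted logit from table 9; this reproduces every cell, including the 0.0043 minimum and the monotone rise with each category. a sketch:

```python
import math

def probability(x9, x12, x13):
    # fitted logit from table 9 (enter method)
    g = -11.799 + 0.779 * x9 + 4.663 * x12 + 0.914 * x13
    return 1 / (1 + math.exp(-g))

# all 5 x 4 x 4 = 80 category combinations of table 10
table10 = {
    (x9, x12, x13): round(probability(x9, x12, x13), 4)
    for x9 in range(1, 6)       # tkwl categories
    for x12 in range(1, 5)      # gross income categories
    for x13 in range(1, 5)      # land productivity categories
}

print(table10[(1, 1, 1)], table10[(1, 2, 3)], table10[(5, 4, 4)])
# 0.0043 0.7404 1.0, matching the corner and example cells of table 10
```

sweeping the grid like this also makes the monotonicity claim explicit: the probability never decreases when any category code increases, since all three coefficients are positive.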
some outputs by using the forward step method

model summary
step | -2 log likelihood | cox & snell r square | nagelkerke r square
1 | 115.827(a) | .488 | .667
2 | 110.907(a) | .502 | .686
a. estimation terminated at iteration number 7 because parameter estimates changed by less than .001.

classification table(a)
observed | predicted: net income = 0 | predicted: net income = 1 | percentage correct
step 1, net income = 0 | 44 | 22 | 66.7
step 1, net income = 1 | 2 | 111 | 98.2
step 1, overall percentage | | | 86.6
step 2, net income = 0 | 44 | 22 | 66.7
step 2, net income = 1 | 2 | 111 | 98.2
step 2, overall percentage | | | 86.6

variables in the equation
variable | b | s.e. | wald | df | sig. | exp(b)
step 1: gross income | 4.315 | .746 | 33.421 | 1 | .000 | 74.790
step 1: constant | -7.508 | 1.448 | 26.882 | 1 | .000 | .001
step 2: tkwl | .624 | .336 | 3.457 | 1 | .063 | 1.867
step 2: gross income | 4.554 | .812 | 31.436 | 1 | .000 | 95.020
step 2: constant | -8.818 | 1.765 | 24.947 | 1 | .000 | .000
a. variable(s) entered on step 1: gross income.
b. variable(s) entered on step 2: tkwl.

based on table 11, the forward step method (2 steps) yields two independent variables that have a significant effect on net income, namely gross income and tkwl. the overall accuracy of predictions is 86.6%. this percentage is lower than the accuracy of the model resulting from the enter method. the model results when data processing uses the backward method are as follows in table 12.

table 12. some outputs by using the backward step method

model summary
step | -2 log likelihood | cox & snell r square | nagelkerke r square
1 | 101.992(a) | .526 | .719
…
10 | 106.400(a) | .514 | .703
11 | 108.059(a) | .510 | .696
a. estimation terminated at iteration number 7 because parameter estimates changed by less than .001.
classification table(a)
observed | predicted: net income = 0 | predicted: net income = 1 | percentage correct
step 1, net income = 0 | 48 | 18 | 72.7
step 1, net income = 1 | 4 | 109 | 96.5
step 1, overall percentage | | | 87.7
…
step 10, net income = 0 | 44 | 22 | 66.7
step 10, net income = 1 | 2 | 111 | 98.2
step 10, overall percentage | | | 86.6
step 11, net income = 0 | 44 | 22 | 66.7
step 11, net income = 1 | 2 | 111 | 98.2
step 11, overall percentage | | | 86.6
a. the cut value is .500

variables in the equation
variable | b | s.e. | wald | df | sig. | exp(b)
step 1: number of dependents | -.095 | .232 | .170 | 1 | .680 | .909
step 1: economic conditions | .223 | .522 | .182 | 1 | .669 | 1.250
step 1: number of trees | .501 | .381 | 1.727 | 1 | .189 | 1.651
step 1: age of trees | -.154 | .234 | .435 | 1 | .510 | .857
step 1: freq. of fertilizer | -.419 | .368 | 1.295 | 1 | .255 | .658
step 1: freq. of pesticides | .016 | .281 | .003 | 1 | .955 | 1.016
step 1: production at harvest | -.512 | .432 | 1.401 | 1 | .236 | .600
step 1: prod. outside harvest | .403 | .270 | 2.229 | 1 | .135 | 1.497
step 1: tkwl | .779 | .394 | 3.919 | 1 | .048 | 2.180
step 1: minimum price | .257 | .730 | .124 | 1 | .724 | 1.294
step 1: maximum price | -.259 | .642 | .163 | 1 | .686 | .772
step 1: gross income | 4.663 | .895 | 27.179 | 1 | .000 | 105.998
step 1: land productivity | .914 | .400 | 5.214 | 1 | .022 | 2.495
step 1: constant | -11.799 | 2.604 | 20.536 | 1 | .000 | .000
…
step 10: prod. outside harvest | .290 | .230 | 1.585 | 1 | .208 | 1.336
step 10: tkwl | .675 | .355 | 3.623 | 1 | .057 | 1.965
step 10: gross income | 4.411 | .809 | 29.699 | 1 | .000 | 82.375
step 10: land productivity | .439 | .241 | 3.327 | 1 | .068 | 1.552
step 10: constant | -10.343 | 2.017 | 26.286 | 1 | .000 | .000
step 11: tkwl | .618 | .349 | 3.127 | 1 | .077 | 1.855
step 11: gross income | 4.451 | .805 | 30.597 | 1 | .000 | 85.724
step 11: land productivity | .394 | .236 | 2.787 | 1 | .095 | 1.483
step 11: constant | -9.522 | 1.832 | 27.018 | 1 | .000 | .000
note: the outputs from step 2 to step 9 are not displayed.

based on table 12, using the backward step method (11 steps), 3 independent variables that have a significant effect on net income are obtained, namely gross income, land productivity, and tkwl. the overall accuracy of predictions is 86.6%. this percentage is the same as the accuracy of the model using the forward method, but lower than that of the model generated by the enter method.
the following table 13 shows a recapitulation of the data processing results based on the 3 methods.

table 13. recapitulation of results from all three methods
method | accuracy of model (%) | significant variables
enter | 87.7 | x9, x12, and x13
forward | 86.6 | x9 and x12
backward | 86.6 | x9, x12, and x13
note: x9: tkwl, x12: gross income, and x13: land productivity

based on table 13, the accuracy of the model resulting from the enter method is the greatest. the coefficients of the independent variables of this model are also the highest among the three models. the resulting model contains the gross income variable, which has the highest effect on net income. net income is gross income reduced by the production costs incurred by farmers. production costs include land management, crop maintenance, labor costs, and other costs. labor wages are usually paid to workers from outside the family, both men and women. women workers are paid for picking coffee fruit. plant maintenance includes the provision of fertilizers and weed control by herbicides. tillage also supports crop maintenance. crop maintenance costs also relate to the age of the trees and the number of trees. older trees need better maintenance, so that the roots remain sturdy, and also need to be rejuvenated. the more trees, the greater the maintenance costs. plant spacing that is too tight can reduce production. in this case, the frequency of fertilizer use, frequency of pesticide use, number of trees, and age of trees variables are related to production costs. production costs are contained in the gross income variable.
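the accuracy figures in table 13 follow from the final classification tables of each method: correct predictions divided by the 179 respondents. a quick recomputation (counts taken from tables 8, 11 and 12):

```python
def accuracy(tn, fp, fn, tp):
    """overall percentage correct from a 2x2 classification table."""
    return 100 * (tn + tp) / (tn + fp + fn + tp)

enter = accuracy(tn=48, fp=18, fn=4, tp=109)       # table 8
forward = accuracy(tn=44, fp=22, fn=2, tp=111)     # table 11, step 2
backward = accuracy(tn=44, fp=22, fn=2, tp=111)    # table 12, step 11

print(round(enter, 1), round(forward, 1), round(backward, 1))
# 87.7 86.6 86.6, the recapitulation of table 13
```

the forward and backward methods end at the same classification table, which is why their accuracies coincide.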
4 conclusion

the conclusion obtained from this study is that the factors that have a significant influence on the net income of pagar alam coffee farmers are gross income, land productivity, and the number of women workers from outside the family. simultaneously, the gross income (x12), land productivity (x13), and number of women workers from outside the family (x9) variables affect net income with the probability model π(x) = e^{g(x)} / (1 + e^{g(x)}), where g(x) = -11.799 + 0.779 x9 + 4.663 x12 + 0.914 x13. all the coefficients of the variables that have a significant effect are positive, so these variables can increase the probability value of the model. the higher each variable category gets, the higher the probability value of the farmers' net income. in each tkwl category, if the gross income category is 4 (x12 = 4), then for every land productivity category starting from 1 (x13 = 1), the value π(x) ≥ 0.9998.

this study does not describe the variables related to land productivity. because this study found that land productivity has a significant effect on the binary logistic regression model, it is necessary to examine the indirect relationship, on farmers' net income, of the variables that have no significant effect on the model. in this case, further analysis by path analysis is needed regarding the indirect effects of the number of trees, frequency of fertilizer used, frequency of pesticides used, crop production, land area, area per tree, and length of time of harvest.

acknowledgments

the authors acknowledge the assistance and financial support of lembaga penelitian dan pengabdian kepada masyarakat (lppm) universitas sriwijaya in this research through “penelitian unggulan kompetitif universitas sriwijaya tahun 2019.”

references

[1] d. b.
hakim, “analisis kebijakan perdagangan internasional pertanian dan manufaktur.” webinar berseri wto dan analisis kebijakan perdagangan, cooperation itabs, dept. ie fem ipb, isei bogor raya, kementerian perdagangan, and unied, 2020.
[2] irmeilyana, ngudiantoro, a. desiani, and d. rodiah, “deskripsi hubungan luas areal dan produksi perkebunan kopi di indonesia menggunakan analisis bivariat dan analisis klaster.” infomedia, 4 (1), 21–27, 2019.
[3] irmeilyana, ngudiantoro, a. desiani, and d. rodiah, “deskripsi hubungan luas areal dan produksi perkebunan kopi di provinsi sumatra selatan.” in prosiding semirata bks ptn indonesia barat, 74–86, 2019.
[4] ___________, “kopi robusta besemah sumatera selatan.” [online]. available: http://www.lintaskopi.com/kopi-robusta-besemah-sumatera-selatan/, 2017.
[5] irmeilyana, ngudiantoro, and d. rodiah, “deskripsi profil dan karakter usaha tani kopi pagar alam berdasarkan descriptive statistics dan korelasi.” infomedia, 4 (2), 60–68, 2019.
[6] irmeilyana, ngudiantoro, and d. rodiah, “laporan penelitian hibah kompetitif: analisis pengaruh faktor-faktor internal dan eksternal terhadap produktivitas petani kopi dan usaha optimalisasi produksi kopi.” indralaya, 2019.
[7] irmeilyana, ngudiantoro, and d. rodiah, “application of simple correspondence analysis to analyze factors that influence land productivity of pagar alam coffee farming.” in international conference on mathematics, statistics, and their applications (icmsa), 2019.
[8] a. agresti, categorical data analysis. new york: john wiley & sons inc., 2009.
[9] d. w. hosmer and s. lemeshow, applied logistic regression, 2nd ed. new york: john wiley & sons inc., 2000.
[10] a. widarjono, analisis multivariat terapan, edisi kedua. yogyakarta: upp stim ykpn, 2015.
[11] m. n.
samsuri, ngudiantoro, and irmeilyana, “pemodelan regresi logistik produktivitas pengrajin songket di kabupaten ogan ilir.” in proceeding of the 5th annual research seminar, 66–71, 2020.

international journal of applied sciences and smart technologies, volume 2, issue 2, pages 111–136, p-issn 2655-8564, e-issn 2685-9432

hardware architecture and implementation of an ai pet robot

elton lemos 1,*, abhishek ghoshal 1, aditya aspat 1
1 department of computer engineering, xavier institute of engineering, mahim, mumbai, maharashtra, india
* corresponding author: eltonlemos2411@gmail.com
(received 03-08-2020; revised 17-08-2020; accepted 18-08-2020)

abstract

the concept of ai companionship is gaining popularity in recent times. our project is an attempt to create a robotic companion that can act just like a pet. we created a pet robot that can perform various functions: it can recognise people, objects and emotions, play games, follow a person around, listen to voice commands, dance, and so on. this paper focuses on the hardware aspect of the robot and the working of the core program (kernel). it further discusses the algorithm that was implemented in this architecture and compares the final version with other variations. our work was able to create a fully functioning ai pet robot using just cheap commercially available development boards, motors and a desktop pc.

keywords: pet, robot, hardware, kernel

1 introduction

for the longest time, man has tried to seek companionship in the form of pets. there are a lot of benefits to having a pet: entertainment, emotional support, protection for you and your belongings, etc.
however, owning a pet also has its responsibilities. our project tries to create a pet such that one does not necessarily have to deal with these responsibilities but still gets the benefits. the ai pet robots available in the market are no doubt innovative, but also expensive, not to mention that they are also not perfect. keeping in mind the pros and cons of these pet robots, we tried to create our own version of an ai pet robot. in this paper we explain the hardware components we used and how they interact with one another to form a functioning robot that is capable of performing complex ai and cv algorithms. we discuss the various problems that we faced due to the limitations of the ai algorithms and the hardware that we used, and how we overcame them. the crux of the paper is the system architecture that we used and the working of the kernel of the robot. throughout this paper, all the ai algorithms mentioned are cited from [1].

2 existing system

when we started our research there were already a few robots available in the market. these were quite expensive and designed by big companies like sony. as a result there were not any papers or designs available related to these robots. sony's aibo was our main inspiration to create a pet robot. we wanted to create a similar pet but without the exorbitant cost. by researching these pet robots we tried to understand what features made a robot more "pet" like and how we could incorporate them in our robot. all of these pets run on centralised systems, where the main decision making hardware and the computing hardware are the same. this requires custom designed computer boards, which increases the production cost. the board needs to be designed keeping in mind that it has to fit inside the robot, not overheat, and be powerful enough to run complex ai programs.
these are very tight constraints, and you need to give up on one to gain another. this in turn limits the performance of the robot.

2.1 aibo

sony's attempt at an ai pet is aibo, a realistic dog-based ai pet with a slew of features like facial recognition, personalities, emotion detection, automatic battery recharging and many more. the aibo robot will do all the basic tricks like "sit" and "handshake", just like a normal pet. in addition to these, it can also take pictures and wirelessly connect to the internet. it is designed solely to be a companion and will not do simple tasks like telling the morning report or watching for intruders. however, aibo is very expensive, selling at a premium price. the key takeaway from aibo is that a pet needs to be cute. sure, some pets have their benefits (like dogs guarding the house), but pets are kept to provide company and entertainment.

2.2 pibo

pibo is a cute bipedal ai pet robot designed by the korean developer circulus. this robot has a few ai assistant features like alarms, weather reports, message notifications, etc. other than that, the robot can take photos and directly upload them to the user's facebook profile. it has other playful features like stories and jokes. pibo can also play music while dancing to it. pibo can also be reprogrammed by the users, giving them control over how this robot functions. this also brings more power to the community, as they can design personalised programs for pibo to execute. pibo currently sells for $840. pibo is more of an ai assistant than a pet. it can perform all of the functions that an ai assistant can whilst walking around the house. this is not really a bad thing, as it can fill the void of the "benefits of having a pet" point mentioned earlier.
as mentioned above, pibo is a bipedal robot; however, after viewing its demo videos, we felt that it moves very slowly for this reason. seeing that a walking motion does not really bring any pet-like attributes, we decided to omit it and implement wheels instead.

2.3 lovots

there are simpler versions of plush smart toys that are capable of being therapeutic for the elderly, and especially for those suffering from anxiety and dementia. there may be cases where senior citizens need a therapy pet but cannot care for one on their own. these are like stuffed-animal-covered electronic appliances with sensors that allow you to interact with them. this sort of "empathy" interaction is something that is upcoming in artificial intelligence. one of the recent products in this field is called the lovot (love robot). lovots have been created in japan by groove x and are designed to provide humans with love, or at least the perception of love. according to their website, a lovot will react to your mood and do all it can to fill you with joy and re-energize you. however, these lovots also have exorbitant price tags. lovots show us that understanding and showing emotions are an important feature of a pet. development of this feature would make the robot very relatable. in fact, perfection of this feature is an important step towards creating the ideal ai pet.

3 hardware components

in this section we have mentioned and explained all the hardware components that we have used in this project. we tried to keep the explanation as crisp as possible, so that it only has details that are relevant to the project and its possible future scope. the hardware that we used in this project was affected by budget and availability constraints. there are many other solutions for the same hardware that can be explored.
3.1 raspberry pi 4

figure 1. raspberry pi 4 [3]

the raspberry pi is one of the world's most developer-friendly computer boards available in the market. it is an economical alternative to a desktop pc, as it is fairly cheap and compact. raspberry pi boards use the linux-based operating system named raspbian. the latest addition to the raspberry pi family is the raspberry pi 4 model b. according to the datasheet [2], the raspberry pi 4 has a quad-core 64-bit arm cortex-a72 running at 1.5 ghz, h.265 (hevc) hardware decode (up to 4kp60), videocore vi 3d graphics, and supports dual hdmi display output up to 4kp60. this raspberry pi comes with 1, 2 and 4 gigabyte lpddr4 ram options. for interfacing, the raspberry pi has 802.11b/g/n/ac wireless lan, bluetooth 5.0 with ble, 1x sd card slot, 4x usb ports, 1x raspberry pi camera port, 1x gigabit ethernet port and a 40-pin gpio header supporting various interface options. we can see the raspberry pi 4 board in figure 1. the role of the raspberry pi (rpi) in our project is to provide vision, communication and decision making. therefore our robot depends on the raspberry pi's ability to provide a high-performance cpu, ram, usb interface and camera interface.

3.2 raspberry pi noir camera v2

figure 2. raspberry pi noir camera v2 with ir torches [5]

in order to provide vision to the robot we need to have a camera installed on it. we have a wide range of camera options in today's market, however we needed something that was efficient and cost-effective. due to the rpi's hardware, we can use not only usb cameras but also cameras designed specially for the rpi using the csi interface. the rpi noir camera v2 [4] is one such specially designed camera for the rpi. it has a high-quality 8 megapixel sony imx219 image sensor, is capable of 3280 × 2464 pixel static images, and also supports 1080p30, 720p60 and 640 × 480p90 video.
we can see the raspberry pi noir camera in figure 2. the noir camera does not have an ir filter, meaning it can also see ir light. this property, when coupled with an ir torch, lets the robot see even in the dark, which is also one of the features of our project. in addition to that, the rpi cam interface (explained in the next subsection), which is an integral part of our robot, works only with csi-interfaced cameras. hence the raspberry pi noir camera v2 was our best option.

3.3 arduino uno

figure 3. arduino uno [6]

the arduino uno is open-source hardware [4]. it's a development board that is great for starter-level developers who want to experiment with coding and robotics. it is a microcontroller board based on the atmega328p. its operating voltage is 5v and its recommended input voltage is 6-12v. it has a vin pin, 3 ground pins, a 3.3v pin, a 5v pin and an aref pin. it has 14 digital pins, of which 6 can provide pwm output, and another 6 analog pins. it is a small, lightweight board, with dimensions of 68.6 mm × 53.4 mm and a weight of 25 g. figure 3 shows the arduino uno board as displayed on the official arduino website. the board also has a usb port that can be used to program it as well as power it. we use the arduino ide software installed on a desktop computer to program the uno board. the arduino is a cheaper board and, unlike the raspberry pi, also supports pwm and analog signals. hence we will be using it as our motor controller.

3.4 arduino motor shield

figure 4. arduino motor shield

the arduino motor shield is designed specifically for the arduino uno, upgrading it to control motors without the need for other external circuitry.
There are multiple versions of the motor shield available in the market, made by different companies. We chose the version that has 4 motor ports and 2 servo ports, as per the requirements of the current design. The motor shield has 2 L293D ICs to control the motors. It also has circuitry to take external power from a 12 V battery, which it uses to power the motors it controls. We can see the Arduino motor shield in Figure 4. This power can also be used to power the Uno, as the motor shield can convert the voltage to 5 V and feed it to the Uno. It must be noted that if we use a shield, the other pins will not be available for conventional use. If we need to access these pins, we will need to solder wiring to the back of the board. Some motor shields do address this problem by providing extra pins that connect to the Uno's unused pins (pins that are not used by the motor shield).

3.5 Display
In order to give visual feedback to the user we used a 0.96 inch OLED display. The display is connected to the RPi via the SDA and SCL pins of the I2C protocol. We followed the setup instructions from an online article [7]. It has a detailed explanation of how to use the libraries installed in the setup and also a sample code that we can use as a basis for our own code. Figure 5 shows the OLED display for Arduino boards.
Figure 5. 0.96 inch OLED display [7]

3.6 Servo Motors and Motors
For movement of the camera we will be using 2 MG90S servos. The MG90S is a PWM-controlled servo which runs at operating voltages of 3.3 V to 6 V. For the bot's wheels we will be using 3 standard 12 V DC motors. We found experimentally that 5 V DC motors are not able to move the robot, so we advise using motors in the voltage range of 6-24 V.
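The MG90S is driven by the usual 50 Hz hobby-servo signal, so the camera position is set purely by pulse width. The following sketch (our own illustration, not code from the paper) maps a target angle to that pulse width, assuming the common 1.0-2.0 ms endpoints over a 0-180 degree range; real units typically need slight calibration of these endpoints.

```python
# Sketch: mapping a target angle to the PWM pulse width for a standard
# 50 Hz hobby servo such as the MG90S. The 1.0-2.0 ms endpoints are a
# common assumption, not a measured calibration.

SERVO_PERIOD_MS = 20.0   # 50 Hz control signal
PULSE_MIN_MS = 1.0       # assumed pulse width at 0 degrees
PULSE_MAX_MS = 2.0       # assumed pulse width at 180 degrees

def servo_pulse_ms(angle_deg: float) -> float:
    """Return the pulse width in milliseconds for a given servo angle."""
    angle = max(0.0, min(180.0, angle_deg))  # clamp to the servo's range
    return PULSE_MIN_MS + (PULSE_MAX_MS - PULSE_MIN_MS) * angle / 180.0

def servo_duty_cycle(angle_deg: float) -> float:
    """Return the duty cycle (0..1) to feed a PWM peripheral."""
    return servo_pulse_ms(angle_deg) / SERVO_PERIOD_MS

print(servo_pulse_ms(90))     # 1.5 ms at the mid position
print(servo_duty_cycle(180))  # 0.1 duty cycle at full deflection
```

On the actual robot this duty cycle would be handed to whichever PWM peripheral drives the servo pin; the clamp keeps out-of-range commands from damaging the gears.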
3.7 Power Bank (5 V)
In order to use the RPi and Arduino Uno we need a reliable 5 V power source, and for that we used a commercially available 5 V, 2.5 A power bank with two ports. The power bank can also be used to power other parts that are connected to the Arduino, like buttons, servos, lights, etc. The pins of an Arduino Uno can only safely provide a limited output current, so we should use the power bank instead to power the other parts and prevent the Arduino from burning up. The plus side is that the Arduino and the parts will share the same supply and ground, making the circuitry much simpler. It is also much safer to use a power bank rather than bare batteries to power the Raspberry Pi, as the power bank comes with circuitry that protects the device it is powering from mishaps.

3.8 Battery (12 V)
Wheel motors that can move the bot, considering its weight, had to be high-torque motors in the voltage range of 7-24 V. Using a 5 V motor, or just providing the above-mentioned motors with 5 V, would not be suitable for the robot. Therefore we need to add a 12 V battery to the circuit especially for the motors. This also helps solve the burning-up problem of the Arduino mentioned in the power bank subsection.

3.9 Sound Card
The Raspberry Pi does not have an inbuilt sound card for input, and therefore we have to add an external USB sound card to attach a microphone to it. It can also be used to improve the quality of the output sound if a better sound card than the one inbuilt in the RPi is used.

3.10 Speaker
We used the speaker of a broken Google Home. It has a decent sound reach and was small enough to fit on the robot. In order to amplify the output signal to the speaker we used a PAM8403 amplifier chip. An alternative solution could be to use a Bluetooth speaker.

3.11 Microphone
The microphone needs to be small and compact.
Generic lapel microphones can record clear audio and are quite small, so we used one for the project. Any other good-quality microphone can be used, such as a Bluetooth headset mic or a USB mic.

3.12 Buttons and Joysticks
As of now we have not implemented a "hot word" response like that of Google Assistant or Amazon's Alexa. So, in order to get the attention of the robot and give it commands, we need to push a button to start the voice recognition. We decided to replace that button with a joystick serving as a hand, so instead we shake the hand of the robot to get its attention.

3.13 Chassis and Wheels
Figure 6. Robot base
We used the wheel base of a broken remote-controlled toy robot, as seen in Figure 6. The base of the toy robot had 3 wheels with motors and an 8x AA battery slot. The wheels were omnidirectional, so just 3 wheels were enough for it to move. The motors on the toy were old and needed to be replaced. The battery slot was not big enough to fit the power bank or our battery pack; however, reshaped a bit, it could securely fit an Arduino with the motor shield. The torso of the robot is just the power bank, to which the RPi and battery are stuck with double-sided tape. The head of the robot is the camera and the mechanism that moves it: a two-servo DIY pan-tilt mechanism, available in any robotics shop, that lets the camera move 180 degrees along each of 2 axes.

4 Wiring Connections
We divided the wiring into two aspects: data connections and electric connections. The data connections show which component communicates with which other components. The electric connections show which power supply provides current to which component. Unless we specifically mention which cable we used to connect two components, it can be assumed that we used standard DIY development cables.
4.1 Data Wiring
Figure 7. Data connections
Figure 7 contains only the components of the robot that need logical connections. The tail of an arrow shows where data is output, and the head of the arrow shows where the data is input. Except for the connections between the Raspberry Pi, desktop PC, and Arduino, all connections are one-way, as they involve either sensors or motors. The Raspberry Pi and the PC are connected wirelessly over a WiFi network. The Raspberry Pi and the Arduino are connected via a USB cable and communicate with each other via serial communication. We found experimentally that a baud rate of 115200 was optimal for our implementation: it hit the sweet spot of quick communication without putting stress on the serial ports. The microphone is connected via a sound card. The speaker can be connected to the Raspberry Pi via the sound card's output port or via Bluetooth.

4.2 Electric Wiring
Figure 8. Power connections
There are 2 main sources of power for the robot, i.e. the power bank and the battery pack. The power bank is 5 V while the battery pack is 12 V. As most of the components work at 5 V, the power bank powers most of the robot's circuitry, as shown in Figure 8. If we choose a power bank with more ports, it becomes easier to design the wiring. Care must be taken that the power bank has enough amperage to handle the requirements of all the parts of the robot; otherwise we would need another power bank or a more capable power source. The battery pack is used solely to power the Arduino's motor shield and hence the motors.
We need to take care that the motor shield is not supplying power to the Uno; otherwise the voltage regulator on the Uno can get fried, disabling the ability of the Uno to take power from the barrel adapter port or the USB. Due to our lack of expertise in this field we could not figure out why this happens, but we have reason to believe it is caused by the interaction of the current from the USB port and the VIN supplied to the Uno by the motor shield.

5 Software Components
In this section we explain all the software needed to use the hardware of the robot.

5.1 Raspberry Pi OS
Raspberry Pi OS (previously called Raspbian) is the Foundation's officially supported operating system [8]. We used the version Buster for our implementation. Raspbian Buster comes in two variants: headed and headless. The headed version provides a GUI, displayed via HDMI, while the headless version provides only a command-line interface. Having a GUI also means the system carries a lot of bloatware and dedicates resources to providing graphics to the user. Since we need high performance from the RPi, we chose the headless version. One can download it from the official Raspberry Pi website and follow the instructions there to install it [8] on a Raspberry Pi 4.

5.2 PuTTY
Since we are using the headless version, we interact with the RPi remotely. This can be achieved via the SSH protocol using PuTTY. For a tutorial on how to install PuTTY and use it to connect to your RPi, see [9].

5.3 OpenCV
OpenCV is a programming library mainly aimed at computer vision functions.
If you want to follow our final implementation, you install OpenCV on the PC like a standard Python library. If instead you want to use the centralised architecture (explained in the following sections), you will need to install OpenCV on your RPi. This is a special build of the OpenCV library, optimised for the hardware architecture of the RPi for maximum performance. It is highly advisable to use the headless version of Raspbian if one wants to use OpenCV, as OpenCV consumes a lot of resources. There are multiple articles online that can help you install OpenCV on the RPi, like the one we followed [10]. We advise finding an up-to-date article yourself, because during our attempts to install OpenCV and other libraries we experienced difficulties caused by mismatches between the versions of Raspbian, OpenCV, and the steps we needed to execute.

5.4 RPi Cam Web Interface
The RPi Cam Web Interface is a web interface designed specifically for the RPi cameras. It sets up a local webpage which lets us control the RPi's camera. The webpage is served at the Raspberry Pi's IP address and can be easily accessed from a mobile device or a computer. This webpage is one of the means of communication between the RPi and the PC in our implementation. It only supports the CSI Raspberry Pi cameras, not USB cameras. We can learn more about it and install it from its website [11].

6 System Architecture
When we first designed the robot, it was meant to run purely on the Raspberry Pi. However, we soon realised that our implementation of even one feature, i.e. face detection and tracking, was too much for the RPi to handle, and that was the simplest of the robot's computer vision algorithms. It became clear to us that if we wanted the robot to work reliably, we needed either better on-site hardware or to process the information elsewhere.
Due to constraints explained below we had to go with the latter, which changed the system architecture entirely: we had to introduce a PC to process the data. In the current implementation the PC and the RPi have to be on the same network. Future work could implement this on a cloud computer.

6.1 Centralised Architecture
In this architecture there is only one computing unit: the Raspberry Pi. We were able to implement only a couple of the robot's features before the Raspberry Pi started to throttle. In this architecture the Raspberry Pi was the heart of the system: the video stream from the camera was processed by the Raspberry Pi, and even the motors were controlled by it. We first implemented a face-tracking algorithm where the Raspberry Pi would detect a face using a Haar cascade and then control the motors fixed to the camera so that it followed the face. At the start there were no problems, and it seemed like the Raspberry Pi could handle it. However, at around the 7th minute the Raspberry Pi began to heat up and throttle, which reduced the throughput of frames. By around the 12th minute the Raspberry Pi could only process one frame per second. This was unacceptable, as it also introduced a considerable amount of latency. In order to implement our project we needed a better computer or an alternative approach. It was also during this phase that we noticed some other mistakes. We realised that the Raspberry Pi is susceptible to reverse/induced current from the motors, resulting in unresponsive pins, so it was imperative to introduce a motor controller for that task. The other problem was that the Raspberry Pi only has digital pins: most sensors produce analog data and thus require a microcontroller in the middle to translate the information.
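Before moving on, the face-tracking loop from the experiment above can be sketched as a simple proportional controller: the centre of the detected face's bounding box is compared with the frame centre, and the pan/tilt servo angles are nudged toward it. The frame size, gains, and function names below are our own illustrative assumptions, not values from the paper.

```python
# Sketch of a face-tracking control step: a face bounding box (as a Haar
# cascade would return it) is compared with the frame centre, and the
# pan/tilt angles move proportionally toward the face. The 800x600 frame
# and the gains are assumptions for illustration.

FRAME_W, FRAME_H = 800, 600
K_PAN, K_TILT = 0.0625, 0.0625   # assumed proportional gains, degrees per pixel

def track_step(face_box, pan_deg, tilt_deg):
    """Return updated (pan, tilt) angles that move the camera toward the face.

    face_box is (x, y, w, h), the format a Haar cascade detector returns.
    """
    x, y, w, h = face_box
    err_x = (x + w / 2) - FRAME_W / 2   # positive: face is right of centre
    err_y = (y + h / 2) - FRAME_H / 2   # positive: face is below centre
    pan = min(180.0, max(0.0, pan_deg + K_PAN * err_x))
    tilt = min(180.0, max(0.0, tilt_deg - K_TILT * err_y))
    return pan, tilt

# A face detected left of centre pulls the pan angle down:
print(track_step((100, 250, 100, 100), 90.0, 90.0))  # (74.375, 90.0)
```

On the real robot this step would run once per processed frame, with the returned angles written to the two camera servos; the clamping mirrors the servos' physical 0-180 degree range.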
This analog-input problem was rectified in the decentralised version by introducing an Arduino Uno.

6.2 Problems of the Centralised Architecture
While creating our robot we designed two versions: one with a centralised architecture, the other with a decentralised architecture. The following were the evident problems of the centralised system.

6.2.1 Power
The seamless interaction of our pet with humans was the evaluation criterion for our project. For this, our CV and AI programs had to work seamlessly, and that is only possible if the computer can handle that kind of load.

6.2.2 Size
As we were creating a cute pet robot, it is obvious that the robot cannot be large. That means the computer, motors, and mechanisms all have to fit in a small robot. In order to run complex CV and AI algorithms the processor needs to be powerful enough, but powerful computers are mostly larger and require a good cooling system, which further adds to the size.

6.2.3 Cost
In order to run AI programs the robot needs a powerful computer, and that comes at a cost. Not to mention that the computer also needs to be small and light, a constraint that again adds to the cost. And when such computers cannot be found, developing our own board is the only option, which is very expensive.

6.2.4 Compatibility
The computer board also needs to be compatible with the other parts of the robot, like the motors, camera, and other sensors. So we need a computer that is good at running AI algorithms but also has the interfaces for robotics.

6.3 Decentralised Architecture
Figure 9. Decentralised architecture
In order to overcome the drawbacks of the centralised architecture we came up with this decentralised architecture, in which the computer vision data is processed not on the RPi but on a desktop computer.
Thus the information is not all processed in just one place, but rather in multiple places, relieving the load on the RPi. While technically this does increase the cost of our implementation, a PC or a laptop is already available to most users. We used the following design for our decentralised architecture, explained with the aid of Figure 9.

6.3.1 Brain (PC)
In our architecture the PC is considered the brain of the robot. It is responsible for performing the complex computations and executing the CV and AI algorithms. The video stream sent by the Raspberry Pi (the spine) is processed by the PC, and the output is sent back to the Raspberry Pi. The output was designed to be simple instructions that the Raspberry Pi had to execute. This relieved a significant burden on the robot's on-board processor (in this case the Raspberry Pi), which made it possible to use a lightweight computer in the robot. For our final implementation we used a computer with the following specifications:
CPU: AMD Ryzen 2700X
GPU: NVIDIA GTX 1070
RAM: 16 GB dual channel
Router: 300 Mbps

6.3.2 Spine (Raspberry Pi)
The spine, i.e. the Raspberry Pi, which was the heart of the centralised structure, acts as the kernel of the system. Its primary duty is to coordinate the Arduino Uno, the Pi camera, and the PC. It has a microphone to take voice commands from the user. The Raspberry Pi is also responsible for determining the current mode of the robot; the Arduino and PC have to switch modes at the request of the Raspberry Pi. The reason we call the Raspberry Pi the spine of the system is that it acts as a bridge between the PC and the Arduino: the PC sends instructions to the Raspberry Pi, the Raspberry Pi evaluates each instruction, and if the instruction is for the Arduino, the Raspberry Pi forwards it to the Arduino.
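The spine's forwarding role just described might look like the following sketch. The instruction table, mode names, and codes here are hypothetical placeholders; the paper fixes only the principle (one-character codes, filtered by the current mode, with Arduino-bound codes forwarded over serial).

```python
# Sketch of the "spine" dispatch logic on the Raspberry Pi: evaluate each
# one-character instruction from the PC, execute it locally or forward it
# to the Arduino, and ignore codes that do not belong to the current mode.
# The concrete table below is invented for illustration.

# mode -> {code: (target, action)}; the targets "rpi"/"arduino" are ours
INSTRUCTIONS = {
    "face_tracking": {"l": ("arduino", "pan_left"),
                      "r": ("arduino", "pan_right")},
    "follow":        {"f": ("arduino", "wheels_forward"),
                      "s": ("arduino", "wheels_stop"),
                      "m": ("rpi", "announce_mode")},
}

def dispatch(code, mode, serial_out):
    """Interpret one instruction character; queue Arduino codes on serial_out."""
    table = INSTRUCTIONS.get(mode, {})
    if code not in table:        # not in the current mode's set: ignore it
        return None
    target, action = table[code]
    if target == "arduino":
        serial_out.append(code)  # stand-in for a serial write to the Uno
    return action

sent = []
print(dispatch("f", "follow", sent))         # forwarded to the Arduino
print(dispatch("f", "face_tracking", sent))  # ignored: wrong mode
print(sent)
```

Note how the same character could mean different things in different modes, which is exactly the instruction-set compression described in section 8.1; here `serial_out` is a plain list standing in for the real serial link.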
Along with that, the Raspberry Pi also acts as a bridge between the camera and the PC.

6.3.3 Eyes (NoIR Camera)
To provide the robot with vision we used a NoIR Pi Camera v2. This camera, when paired with IR torches, provides vision to the robot even when there is no light. It also helps produce a properly illuminated image for the PC to process, as the IR torches act like a flashlight. This feature is a great help to our face recognition program, because IR light is invisible to the human eye and hence does not irritate the user even when the camera is pointed directly at the user's face.

6.3.4 Limbs (Arduino Uno)
To control the motor movements of the robot we chose an Arduino Uno with a motor shield. The Arduino receives instructions from the Raspberry Pi via the serial bus and then drives the motors as per the given instructions. It also processes data received from the sensors connected to it and only forwards data to the Raspberry Pi when an event occurs (an event is when the data from the sensors satisfies a particular condition). This reduces the strain on the serial bus and thus on the Raspberry Pi as well. It also allows the Arduino to control the motors without consulting the Raspberry Pi or the PC in emergency situations; e.g. when the distance sensor alerts the Arduino that the robot is close to an object, the Arduino can stop the motors at that very moment rather than wait for the data to reach the Raspberry Pi and come back.

7 Internal Communication
This section explains how the processing components communicate with each other. We divided the explanation component-wise so one can focus on all the communication requirements between a given pair of components. The methods mentioned in this section are from our implementation of the robot.
There are many other methods by which the same objective could be achieved, and a detailed study of which method is most efficient could help improve the robot.

7.1 Raspberry Pi and PC
The RPi and the PC are where most of the data is communicated and processed. The 2 main types of data that need to be communicated between them are a video stream and instructions in the form of characters.

7.1.1 Socket
We used a Python socket program for the instructions. The PC was the server and the Raspberry Pi was the client. The socket link is two-way: the PC sends instructions to the Raspberry Pi, while the Raspberry Pi uses it to alert the PC of mode changes. The instructions sent by the PC can be either for the Raspberry Pi or for the Arduino; it is the Raspberry Pi's responsibility to forward Arduino-bound commands onward. Each instruction is just a single character (due to the small instruction set; the more functionality is added to the bot, the more instruction codes will be required). The RPi receives a character and uses it to determine which instruction the PC wants it to execute. At the same time, the RPi can send characters to the PC to tell it which program to run.

7.1.2 Socket for Images
In the early stages we tried to send the frames of the video to the PC via the socket program. However, this was very inefficient and would often lead to desynchronization: the Raspberry Pi would send frames faster than the PC could process them, and the frames would pile up in the buffer. The PC would process frames sequentially, and thus any lag that occurred would become permanent. As the lag piled up, the stream would go completely out of sync and frames would be processed about 5 seconds later than they were supposed to be. Eventually we gave up on this concept and focused on the RPi Cam Web Interface.
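The character-based exchange of section 7.1.1 can be sketched as below. A `socketpair` stands in for the real PC-RPi TCP connection so the exchange runs in a single process, and the instruction codes are illustrative only.

```python
# Sketch of the two-way character protocol between the PC (server) and
# the RPi (client). socket.socketpair() gives a connected pair of sockets
# in one process, standing in for the real network link; "f" and "T" are
# made-up codes for illustration.

import socket

pc_end, rpi_end = socket.socketpair()  # stand-in for a TCP connection

# PC -> RPi: a single-character instruction for the current mode
pc_end.sendall(b"f")
instruction = rpi_end.recv(1).decode()
print("RPi received instruction:", instruction)

# RPi -> PC: a single character announcing a mode change
rpi_end.sendall(b"T")
mode_code = pc_end.recv(1).decode()
print("PC received mode change:", mode_code)

pc_end.close()
rpi_end.close()
```

In the real system the server side would of course `bind`/`listen`/`accept` on the WiFi network and the RPi would `connect` to the PC's address; the one-byte payloads are what keep this channel cheap.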
7.1.3 RPi Cam Web Interface
We used the RPi Cam Web Interface to set up a local web server hosting the video stream from the Pi camera, so the PC could access the video feed via its URL whenever it wanted. This gave the PC the freedom to read frames whenever it was free; it was not forced to read stale frames from a buffer as in the previous approach. This removed the lag build-up problem we faced before. However, it restricted our system to running on a single network, since we were hosting a local web server.

7.2 Raspberry Pi and Arduino Uno
The Arduino Uno and Raspberry Pi are both boards on the robot itself, so a wired connection between them is possible. To achieve this we used serial communication via USB. The Raspberry Pi sends its own commands, and forwards commands from the PC, to the Arduino via this interface. The Arduino, in turn, uses this interface to send sensor values to the Raspberry Pi.

8 Implementation
In this section we explain the step-by-step working of the hardware of the robot. Before that, however, it is important to understand what an instruction set and a mode are with respect to our project; the remaining terms are explained in detail in the earlier sections.

8.1 Instruction Set
Our pet robot has many features, like face tracking, face detection, follow-around, mini games, etc. These algorithms do not all run simultaneously, and each of them has some set of instructions that needs to be passed around. There are sometimes errors in received messages due to connection issues: e.g. if the Arduino tries to send "a", the Raspberry Pi might read it as "b", perhaps because the cable moved a little.
In such situations, if there are individual codes for individual instructions, there is a possibility that a misread instruction belongs to some other algorithm. This would lead to the robot performing random functions which the current algorithm has no way to stop. One such situation occurred when the forward wheel-movement instruction from the follow algorithm was executed by the Arduino while the robot was only supposed to look for faces as per the face-tracking algorithm. Not only did it execute a wrong instruction, but there was also no way of stopping it, as the counter to it, i.e. the stop instruction, could only be issued by the follow algorithm. Therefore we grouped the instructions by the algorithm that is supposed to be executing at the time; we called these groups modes. Only the instructions belonging to the current mode are executed, while the rest are ignored. This also gave us the opportunity to reduce the size of the instruction set, since one character (code) can be interpreted as different instructions depending on the current mode. Which algorithm is to be executed is decided by the mode the bot is in, and that is decided by the RPi, as mentioned earlier. If the bot is in face-tracking mode, the PC will run the face-tracking algorithm and give instructions from the face-tracking instruction set. Similarly, the RPi and Arduino will perform their algorithms and interpret instructions from the face-tracking instruction set.

8.2 Working
1. The PC starts the socket server.
2. The RPi starts a socket client and tries to connect to the PC.
3. On establishment of the connection, the RPi starts a web server via the RPi Cam Web Interface.
4. The Raspberry Pi starts in the face detection and tracking mode.
5.
The PC takes frames from the web server and runs the CV algorithm corresponding to the robot's mode.
6. The PC then sends an instruction to the RPi via the socket.
7. On receiving the instruction, the RPi interprets and executes it.
8. If the instruction is for the Arduino, the RPi forwards it to the Arduino via serial communication.
9. The Arduino then interprets the instruction and executes it.
10. If the Arduino's sensors send it important information, the Arduino will interpret it.
11. The Arduino might take action itself, or
12. the Arduino will alert the RPi of the sensor information.
13. The RPi will then interpret the information and take action.
14. If the information was a handshake (movement of the joystick up and down), the RPi will listen to the user for commands.
15. The RPi will then interpret the commands, decide which mode the robot needs to be in, and inform the PC and Arduino of the mode change.
16. On receiving the mode change, the PC and Arduino will acknowledge it and change their mode.
Steps 5 onward run in a loop until the robot is turned off. It should be noted that the RPi Cam Web Interface runs as a different user on the RPi and does not interfere with the execution flow of the robot's main code.

9 Evaluation
In our efforts to improve the performance of the robot, we tested the changes made to our architecture against the face-tracking algorithm. We also used PCs with different specifications. As a standard we used a video stream of 800x600 resolution as the input to our algorithm. The evaluation criteria were FPS, average latency, and latency after 7 minutes. We also capped the frame rate at 60 FPS. We used the following PC specifications for the test.
Raspberry Pi 4: quad-core ARM Cortex-A72 processor, 4 GB RAM
PC A: Intel i7 CPU, AMD Radeon R5 M335 GPU, 16 GB RAM
PC B: Intel i7 CPU, NVIDIA 970M GPU, 16 GB RAM
PC C: AMD Ryzen 2700X CPU, NVIDIA 1070 GPU, 16 GB RAM

10 Results
As per Figure 10, we can see that the more powerful the computer, the lower the latency. We can also see from Figure 10 that when the PC pulls frames from the Raspberry Pi, there is no lag build-up, as opposed to when the Raspberry Pi pushes images to the PC. This can be credited to the fact that the PC only gets fresh frames, as it reads a frame only when it has finished processing the previous one. As per Figure 11 and Figure 12, we can see that without the RPi Cam Web Interface the video would have a large amount of lag, making it impractical for the robot.
Figure 10. Frames per second
Figure 11. Latency in ms
Figure 12. Latency in ms (after 7 mins)

11 Conclusion
Figure 13. AI pet robot
We successfully designed the hardware of a robot that can run complex AI and CV algorithms. The pet robot is shown in Figure 13. The robot was small enough to be carried in our hands. The architecture we propose can easily be scaled to make one PC handle multiple robots. This, coupled with the concept of cloud computing, would significantly reduce the cost in the case of mass production, as the PC was the most expensive component.

References
[1] A. Ghoshal, E. Lemos, A. Aspat, "OpenCV image processing for AI pet robot," unpublished, 2020.
[2] Raspberry Pi Foundation, "Raspberry Pi 4 Model B data sheet," June 2019.
[3] Raspberry Pi Foundation, https://www.raspberrypi.org/products/raspberry-pi-4-model-b/
[4] Raspberry Pi Foundation, "Raspberry Pi NoIR Camera Module v2 data sheet," 2016.
[5] T. K. Hareendran, "Night vision camera adapter," Electro Schematics, https://www.electroschematics.com/night-vision-camera/
[6] Raspberry Pi Foundation, "Raspberry Pi NoIR Camera Module v2 data sheet," 2016.
[7] Matt, "Using an I2C OLED display module with the Raspberry Pi," Raspberry Pi Spy, https://www.raspberrypi-spy.co.uk/2018/04/i2c-oled-display-module-with-raspberry-pi/, April 2018.
[8] Raspberry Pi Foundation, https://www.raspberrypi.org/downloads/raspberry-pi-os/
[9] P. Dalmaris, "Raspberry Pi: Full Stack," Section 3: Getting Started.
[10] S. Malik, "Install OpenCV 4 on Raspberry Pi," LearnOpenCV, https://www.learnopencv.com/install-opencv-4-on-raspberry-pi/, November 2018.
[11] silvanmelchior, "RPi Cam Web Interface," eLinux wiki, https://elinux.org/index.php?title=RPi-Cam-Web-Interface, 2013.

International Journal of Applied Sciences and Smart Technologies, Volume 5, Issue 1, pages 67-74, p-ISSN 2655-8564, e-ISSN 2685-9432
This work is licensed under a Creative Commons Attribution 4.0 International License.

Laser Based Vibration Sensor Through Mobile
R. K. Mahapatra1,*, Shalini J.
Yadav2, and Rajan Yadav2
1Department of Electronics and Telecommunication, Thakur College of Engineering and Technology, Mumbai, India
2Student, Department of Electronics and Telecommunication, Thakur College of Engineering and Technology, Mumbai, India
*Corresponding author: mail2rashmita@gmail.com
(received 29-05-2022; revised 22-01-2023; accepted 19-01-2023)

Abstract
Machine condition monitoring has gained momentum over the years and is becoming an essential component of today's industrial units. A cost-effective machine condition monitoring system is the need of the hour for predictive maintenance. This paper presents the design and implementation of a vibration-sensor-based system operated through smartphones. Vibration analysis plays a major role in detecting machine defects and developing flaws before the equipment fails and potentially causes damage. The concept of this project is to detect faulty components in industrial machines so that a faulty component can be replaced before the whole machine is damaged, improving the durability of the machine.

Keywords: vibration sensor, LDR sensor, smartphone, Raspberry Pi

1 Introduction
Vibration measurement using suitable signal processing on set-up data is a powerful tool to identify and predict failure. Conducting different vibration analysis techniques can improve machine efficiency and availability. Monitoring the vibration characteristics of a machine can provide information on its health condition, and this information can be used to detect problems that might be incipient or developing. There are two ways to perform the analysis, with contact and without contact; this project is based on non-contact analysis using a laser-based vibration sensor.
Usually, in contact-type vibration sensing, the sensor is attached to the machine or instrument in order to detect the vibration amplitude and frequency. Due to accessibility issues, or because a contact sensor adds mass to the instrument or machine and may change its vibration characteristics, attaching a contact sensor is sometimes impractical in situations where precise vibration measurement is needed, or in toxic and hazardous environments. Moreover, non-contact analysis is more affordable, requires less human labour, and produces better results. In industry, machine monitoring is necessary so that every machine functions properly and does not affect plant production. This project offers a solution to this problem by checking the vibration level of a machine; if the vibration level increases, an alert is raised so that the faulty component can be replaced on time and plant production can continue.

2 Research Methodology

Mehrabi and Farhangdoust studied non-contact vibration measurement using a laser for structural cable health monitoring [1]. To measure cable vibration, a non-contact remote-sensing laser vibrometer is utilized. At present an operator must gather the vibration data; in the future, the system will be upgraded with Bluetooth connectivity so that it may be managed from a safe location. The accuracy can be increased by increasing the sampling frequency, and the system can be modified so that all parameters, such as vibration, force, and damping ratio, can be observed in a single device at the same time. Rawat and Kawade reported the development of a laser vibrometer [2]. They used the optical triangulation principle, with the laser source, target, and detection system forming the three vertices of a triangle. The laser beam strikes the target, and the backscattered light is collected by the detection system.
The frequency range is 0-1 kHz; extending it toward 0-1 GHz would make the readings more accurate. Goyal and Pabla described the development of a non-contact structural health monitoring system for machine tools [3]. Real-time structural health monitoring (SHM) is paramount for machining processes: during machining, vibrations are always brought forth by mechanical disturbances from various sources such as the engine, sound, and noise. The purpose of SHM is to avoid wasteful activities, to optimize the profitability of products and services, and to improve the information obtained about the condition of the machine tools being used.

Machine condition monitoring has gained momentum over the years and is becoming an essential component of today's industrial units. The basic block diagram of the proposed system is shown in Fig. 1.

Figure 1. Basic circuit connection

Predictive maintenance urgently requires a system for monitoring machine status that is both affordable and effective [4, 5]. Non-destructive vibration techniques are likewise highly helpful [6]. Simultaneous multidimensional measurements are feasible [7], and such techniques have also been applied effectively to cable-stayed bridges [8]. In the modern world, sensors play a significant role [9, 10]. In this paper, we have developed a machine condition monitoring system using a smartphone, thanks to the rapidly growing smartphone market in both scalability and computational power. In spite of certain hardware limitations, this paper proposes a machine condition monitoring system that can acquire data, build a fault diagnostic model, and determine the type of fault in the case of unknown fault signatures.
Results for the fault detection accuracy are presented, which validate the prospects of the proposed framework in future condition monitoring services.

Results and Discussions

The basic circuit connection is shown in Fig. 2. The vibration sensor is implemented using a laser and an LDR sensor (Fig. 3), which are fixed on a wooden board in a straight formation. A tube is placed in front of the LDR to block stray light.

Figure 2. Basic circuit connection
Figure 3. Basic physical circuit connection
Figure 4. Laser-based vibration sensor output

The data from the setup in Fig. 3 is sent to the Raspberry Pi and displayed on the default monitor. The Raspberry Pi code also hosts a local website, which is used to plot and display a graph of the values.

Table 1. Readings obtained on the serial monitor

Time | Sensor | Output
14:44:26.567 | vibration sensor | 176
14:44:26.567 | vibration sensor | 176
14:44:26.567 | vibration sensor | 176

The default monitor shows the real-time values represented in Table 1, refreshing at a very fast rate. This high-speed value recording makes the system accurate and makes monitoring easier.

3 Conclusions

In this paper, a vibration sensor used to detect vibrations in machine parts or structures was deployed using a laser and an LDR sensor. The approach used in this paper is non-contact because of the use of a laser, which is an advantage over other methods on the market for measuring vibrations. With further evolution in light-sensing devices and sensors, this project can gain many advantages over other methods. Even with present-generation technology and components, the system is already accurate and can detect very slight changes in vibration.
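The alert rule described above (raise an alert when the vibration level exceeds a level) can be sketched as follows. This is a minimal illustration, not the authors' Raspberry Pi code: the LDR readings are supplied as plain integers, and the threshold value is an assumption.

```python
# Minimal sketch of threshold-based alerting on LDR vibration readings.
# The sensor-acquisition details (ADC, GPIO pins) are omitted; samples are
# plain integers like the values shown in Table 1.

VIBRATION_THRESHOLD = 200  # assumed alert level, not from the paper

def check_alerts(samples, threshold=VIBRATION_THRESHOLD):
    """Return the indices of samples whose vibration level exceeds the threshold."""
    return [i for i, value in enumerate(samples) if value > threshold]

# Steady readings like those in Table 1 (176), plus one spike:
readings = [176, 176, 176, 240, 176]
print(check_alerts(readings))  # → [3]
```

In a deployed system, the same check would run inside the acquisition loop and trigger the alert shown on the local website.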
Acknowledgements

We would like to acknowledge TCET for providing us with a platform to instill research qualities and a medium to share and present our ideas. We would also like to thank our mentor for guiding us throughout the entire journey.

References

[1] A. B. Mehrabi and S. Farhangdoust, "A laser-based noncontact vibration technique for health monitoring of structural cables: background, success, and new developments," Advances in Acoustics and Vibration, 2018.
[2] A. S. Rawat and N. Kawade, "Development of laser vibrometer," BARC Newsletter (2016) 16.
[3] D. Goyal and B. S. Pabla, "Development of non-contact structural health monitoring system for machine tools," Journal of Applied Research and Technology, 14 (4) (2016) 245-258.
[4] S. Korkua et al., "Wireless health monitoring system for vibration detection of induction motors," 2010 IEEE Industrial and Commercial Power Systems Technical Conference, IEEE, 2010.
[5] I. Gondal, M. F. Yaqub, and X. Hua, "Smart phone based machine condition monitoring system," International Conference on Neural Information Processing, Springer, Berlin, Heidelberg, 2012.
[6] W.-H. P. Yen, A. B. Mehrabi, and H. Tabatabai, "Evaluation of stay cable tension using a non-destructive vibration technique," Building to Last, ASCE, 1997.
[7] R. Kulkarni and P. Rastogi, "Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis," Optics and Lasers in Engineering, 104 (2018) 109-116.
[8] A. B. Mehrabi, "In-service evaluation of cable-stayed bridges, overview of available methods and findings," Journal of Bridge Engineering, 11 (6) (2006) 716-724.
[9] B. N. Mohapatra et al., "Easy performance based learning of Arduino and sensors through Tinkercad," International Journal of Open Information Technologies, 8 (10) (2020) 73-76.
[10] B. N. Mohapatra et al., "Smart performance of virtual simulation experiments through Arduino Tinkercad circuits," Perspectives in Communication, Embedded-Systems and Signal-Processing (PiCES), 4 (7) (2020) 157-160.

International Journal of Applied Sciences and Smart Technologies
Volume 3, Issue 1, pages 93-100
p-ISSN 2655-8564, e-ISSN 2685-9432

Effects of the Shock Wave Phenomenon on Different Convergent Lengths in the Mixing Chamber of a Steam Ejector

Stefan Mardikus1,*
1Department of Mechanical Engineering, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: stefan@usd.ac.id
(received 26-03-2021; revised 24-04-2021; accepted 05-05-2021)

Abstract

The shock wave phenomenon occurs in a steam ejector when the working fluid at high pressure suddenly transitions to low pressure and high speed. In this study, the shock wave effect is investigated for different convergent lengths of the mixing chamber in order to find the highest entrainment ratio, which characterizes the performance of the steam ejector. The operating pressure of the primary flow was in the range 0.68 MPa to 1.39 MPa, and the secondary flow was set from 0.38 MPa to 0.65 MPa. The results of this study demonstrate that the highest entrainment ratio occurred at a convergent length of 69 mm.
Keywords: steam ejector, shock wave, convergent length

1 Introduction

The steam ejector is a device used to recover waste energy by entraining a low-pressure, low-thermal-energy fluid (secondary flow) with a high-pressure fluid (primary flow), without moving parts or electrical energy sources [1]. The steam ejector can also be used to mix fluids with different physical properties in the chemical industry, by setting the operating pressure or temperature of the fluids. Steam ejector systems are applied in many industries, such as power plants, refrigeration, and food processing technology. For example, handling corrosive liquids or gases is a difficult problem in process industries; the ejector can pump such a fluid through a pressure difference, because when the primary fluid passes the nozzle, the secondary fluid is drawn into the mixing chamber area [2]. The primary fluid, which has high pressure and temperature, loses pressure and temperature as it passes the nozzle. Moreover, the flow of the primary fluid forms an expansion angle in the mixing chamber; therefore, the low-pressure secondary fluid can be entrained into the ejector system. In the chemical industry, the steam ejector is used to mix substances with different physical properties. Furthermore, the geometry of the steam ejector, consisting of the nozzle and the mixing chamber areas (convergent, throat, and divergent), is an essential factor that can enormously improve steam ejector performance [3]. Among the many investigations of steam ejector performance, Sharifi and Sharifi optimized the nozzle to reduce the energy consumption of the steam ejector system across different nozzle geometries; their investigation proved that improving the nozzle geometry can augment the entrainment ratio of the steam ejector [4].
Meanwhile, Zhu and Jiang [5] evaluated the shock wave phenomenon in convergent and convergent-divergent nozzles and found that an increase in shock wave length can degrade the performance characteristics of the steam ejector. The shock wave phenomenon appears when the working fluid at high pressure suddenly transitions to low pressure and high speed, and an expansion angle of the fluid flow then appears. In this work, a strategy to improve the steam ejector's performance is investigated based on the shock wave phenomenon in the mixing chamber area. The effect of different convergent lengths, which can give rise to the shock wave phenomenon, is evaluated experimentally under several operating pressure conditions of the primary and secondary flows to find the highest entrainment ratio.

Figure 1. Schematic of the steam ejector experiment (instrumentation: primary, secondary, and outlet pressures and temperatures; flowmeter; check valve; condenser; evaporator; compressor; steam ejector; major pipeline).

2 Research Methodology

The schematic of the experimental steam ejector can be seen in Figure 1. The main experimental setup consists of a steam ejector (1), a compressor (2), a condenser (3), and an evaporator (4). In this research, a 1 PK compressor was used to compress the primary fluid. The characteristics of the primary, secondary, and discharge fluids were measured with type-K thermocouples and pressure gauges. The primary and secondary pressures can be set using regulator valves: the primary pressure was operated from 0.68 MPa to 1.39 MPa and the secondary pressure from 0.38 MPa to 0.65 MPa. This study used R-22 as the working fluid. The condenser was an air-cooled plate heat exchanger constructed from 3/8-inch diameter copper pipe.
The steam ejector geometry model can be seen in Figure 2, where the convergent length is varied. Three convergent lengths were investigated: 51 mm, 69 mm, and 75 mm.

Figure 2. Geometry model of the steam ejector component.

3 Results and Discussion

As shown in Figures 3-6, the experimental results show that the optimum entrainment ratio among the different convergent lengths was obtained at a convergent length of 69 mm for all secondary pressures. This is due to the shock wave phenomenon that appears when the working fluid passing through the nozzle suddenly undergoes a pressure decrease and a velocity increase. Increasing the primary pressure augments the mass flow rate of working fluid into the mixing chamber.

Figure 3. Entrainment ratio versus convergent length at a secondary pressure of 0.38 MPa, for primary pressures of 0.68, 0.86, 1.03, 1.20, and 1.39 MPa.
Figure 4. Entrainment ratio versus convergent length at a secondary pressure of 0.45 MPa, for the same primary pressures.
Figure 5.
Entrainment ratio versus convergent length at a secondary pressure of 0.52 MPa, for primary pressures of 0.68, 0.86, 1.03, 1.20, and 1.39 MPa.
Figure 6. Entrainment ratio versus convergent length at a secondary pressure of 0.58 MPa, for the same primary pressures.
Figure 7. Entrainment ratio versus convergent length at a secondary pressure of 0.65 MPa, for the same primary pressures.

In Figure 7, the highest entrainment ratio occurred at a primary pressure of 0.68 MPa. This experiment found that the operating primary pressure affects the entrainment ratio and hence the performance of the steam ejector.
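The entrainment ratio used as the performance measure throughout this paper is commonly defined as the ratio of the secondary (entrained) mass flow rate to the primary (motive) mass flow rate. A minimal sketch, with illustrative flow rates rather than values from this experiment:

```python
# Entrainment ratio of a steam ejector: secondary mass flow / primary mass flow.
# The numeric values below are illustrative, not measured data from the paper.

def entrainment_ratio(m_secondary, m_primary):
    """Entrainment ratio = secondary mass flow rate / primary mass flow rate."""
    return m_secondary / m_primary

# e.g. 0.012 kg/s entrained by a 0.030 kg/s primary flow:
print(round(entrainment_ratio(0.012, 0.030), 3))  # → 0.4
```

A higher ratio means more secondary fluid is entrained per unit of motive fluid, which is why the 69 mm convergent length is reported as the best-performing geometry.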
At the same primary pressure, as the convergent length becomes longer, the entrainment ratio slightly decreases at all secondary operating pressures, because a longer convergent length produces a smaller expansion angle [6]. When this happens, the mass flow rate of the secondary flow into the steam ejector's mixing chamber is slightly reduced [5]. This phenomenon is caused by a shorter entraining duct in the suction chamber area [2].

4 Conclusion

Based on the experimental results, the convergent length of the mixing chamber influences steam ejector performance measures such as the entrainment ratio. The shock wave phenomenon occurred when the working fluid passed the nozzle area, producing a diamond shock wave in the mixing chamber. The diamond shock wave phenomenon is one of the important factors that augment the entrainment of the steam ejector; in this investigation, the highest entrainment ratio occurred at a convergent length of 69 mm.

References

[1] J. Dong, X. Chen, W. Wang, C. Kang, and H. Ma, "An experimental investigation of steam ejector refrigeration system powered by extra low temperature heat source," Int. Commun. Heat Mass Transf., 2017.
[2] V. V. Chandra and M. R. Ahmed, "Experimental and computational studies on a steam jet refrigeration system with constant area and variable area ejectors," Energy Convers. Manag., 2014.
[3] N. Ruangtrakoon, S. Aphornratana, and T. Sriveerakul, "Experimental studies of a steam jet refrigeration cycle: effect of the primary nozzle geometries to system performance," Exp. Therm. Fluid Sci., 2011.
[4] N. Sharifi and M. Sharifi, "Reducing energy consumption of a steam ejector through experimental optimization of the nozzle geometry," Energy, 2014.
[5] Y. Zhu and P.
Jiang, "Experimental and analytical studies on the shock wave length in convergent and convergent-divergent nozzle ejectors," Energy Convers. Manag., 2014.
[6] Y. Wu, H. Zhao, C. Zhang, L. Wang, and J. Han, "Optimization analysis of structure parameters of steam ejector based on CFD and orthogonal test," Energy, 2018.

International Journal of Applied Sciences and Smart Technologies
Volume 4, Issue 2, pages 123-130
p-ISSN 2655-8564, e-ISSN 2685-9432

Sugarcane Production Modeling Using Machine Learning in Western Maharashtra

Chhaya Narvekar1,*, Madhuri Rao2
1Department of Information Technology, Xavier Institute of Engineering, Mumbai, India
2Thadomal Shahani Engineering College, Mumbai, India
*Corresponding author: chhaya.n@xavier.ac.in
(received 01-05-2022; revised 20-07-2022; accepted 23-07-2022)

Abstract

Agriculture is the most important sector of the Indian economy, and India is the world's second-largest producer of sugarcane. This study was undertaken in Shirol tehsil, Kolhapur district, Maharashtra state, India, with the aim of modeling sugarcane production forecasting using supervised machine learning algorithms. Sugarcane is the most widely cultivated crop in this area. We applied supervised machine learning to forecast sugarcane productivity village-wise, based on ten years of sugarcane production data from 2010 to 2020. Sugarcane yield prediction accuracy is around 65%, based only on the data provided by the sugar factory.

Keywords: sugarcane, productivity, machine learning, forecasting

1 Introduction

The Indian economy relies heavily on sugarcane cultivation. Sugar production, as well as enterprises manufacturing alcohol, paper, chemicals, and animal feed, rely on it for raw materials. In India, sugarcane is processed through a network of sugar mills, as well as various other businesses with backward and forward connections.
The demand for higher sugarcane production in India is increasing due to the multi-purpose usage of sugarcane and its byproducts in numerous sectors [1]. Despite rising urbanization around the world, agriculture remains the primary source of income for a large percentage of the population. Although technological developments have resulted in more accurate weather predictions and increased yields, much work remains to be done to provide farmers with tools that take local data into account so they can forecast yields. In the Maharashtra (India) region, the sugarcane cultivation life cycle (SCLC) spans around 12 months, with plantation starting in three separate seasons. Our method relies on past production data to train a supervised machine learning system and make sugarcane crop predictions. Climate, production environments, and agronomic aspects of agricultural management, such as variety selection, cane field age, fertilization, pest and disease control, and weed control, all influence sugarcane yield [2].

Description of study area. Shirol taluka of Kolhapur district is gifted with natural irrigation potential on account of five major rivers: the Krishna, Panchaganga, Warana, Dudhganga, and Vedganga [3]. The soil type here is alluvial. Normal rainfall during June-October is 1019.5 mm. The top three crops cultivated are sugarcane 113.9 ('000 ha), rainfed paddy 113.8 ('000 ha), and groundnut 57.4 ('000 ha) [4]. India is the world's second-largest producer of sugarcane after Brazil, and sugarcane is grown in all of India's states at various times of the year. In this study, we propose a supervised machine learning based crop yield forecasting model for sugarcane as the principal crop of the study area.
Crop analysis and agricultural production forecasting have always relied on statistical models. In this work, models are applied to ten years of sugarcane production data. Three algorithms were applied for sugarcane productivity prediction and five algorithms for sugarcane yield prediction, using production data from the study area provided by Shree Dutta Sugar Factory, Shirol.

2 Research Methodology

Materials and methods: Sugarcane is India's most important cash crop. It entails less risk, and farmers may be quite certain of a return even in difficult conditions. Sugarcane is the first crop of Kolhapur district [4]. The sugarcane yield data, in tons of cane per hectare [5], were originally available at the level of individual farmers and village gat numbers. The ten years of data from the sugar mill include farmer name, gat number, village, date of sowing, season, area of sugarcane cultivated, and production. Knowing the size of the sugarcane harvest can help industry members make better decisions [2].

Table 1. Ten-year sugarcane cultivation trend in the study area

Season-year | Cultivated area | Total sugarcane production (ton)
2010-2011 | 14556.33 | 1344688.952
2011-2012 | 13032.36 | 1229240.511
2012-2013 | 10824.94 | 1196219.045
2013-2014 | 11139.9 | 1191862.504
2014-2015 | 337.8816667 | 67610.66034
2015-2016 | 11425.21 | 1294479.054
2016-2017 | 15058.3365 | 1224696.921
2017-2018 | 10524.99 | 1187021.203
2018-2019 | 12118.64 | 1212491.125
2019-2020 | 11637.48 | 1047024.887
2020-2021 | 11272.86 | 1192268.53

Figure 1.
Village-wise sugarcane production (villages include Agarbhag (Shirol), Akiwat, Ankali (Sangali), Arjunwad, Aurwad, Barwad, Borgaon, Chand-Shiradwad, Chichwad (Kolhapur), Chinchwad, and others).

From this data, we added a column for productivity, created a village-wise dataset, and applied machine learning algorithms to predict the productivity of a particular village. Table 1 shows the season-wise sugarcane cultivated area and production in the study area. The model's predictor variable, village productivity, is calculated on a yearly basis. Regression analysis is a basic technique for modeling the relationship between one or more independent (predictor) variables and a dependent (response) variable that we want to forecast, and it is one of the standard tools of statistical analysis [5]. Sugarcane production in the study area is summarized in Table 1.

Dataset description. Figure 2 shows a sample of the dataset recorded by the sugar factory; yearly production is also visualized in Figure 3. Before applying machine learning algorithms for yield prediction, some columns that are weakly correlated with the target were removed. The features shown in Figure 4 were used for training the ML regression models; after pre-processing, such as converting categorical variables into numerical ones, the dataset has 57495 rows and 11 columns. The dataset was further divided into 80% training and 20% testing. The performance of the models is discussed in the results section.

Figure 2. Sample recorded data
Figure 3. Yearly production
Figure 4.
Features used for modeling

From the data provided, another dataset was created containing village-wise yearly cultivated area and village-wise sugarcane production; the productivity of each village was calculated as production per unit area. Machine learning models were trained and tested on this derived dataset as well, for forecasting the productivity of a particular village. Productivity was compared with national-level and state-level productivity to get further insights. In both cases, parameters such as climate, nutrient supply, and soil fertility status were not taken into consideration; including them should improve accuracy.

3 Results and Discussion

Crop forecasting is the science of estimating crop yields and production ahead of time, usually by a couple of months. A crucial part of crop production forecasting is defining the time horizon in terms of time series forecasting approaches. This study included three algorithms for productivity prediction, random forest (RF), gradient boosting (GBM), and XGBoost, which are the most commonly used for agricultural modeling [6]. We tried five different algorithms for modeling yield [4]; their performance is shown in Table 2. Performance is not great because there are also extrinsic parameters that impact crop production, such as climate, rainfall, soil fertility, and management skill, which are not considered in the current study.

Table 2. Sugarcane yield prediction model performance

Machine learning algorithm | Accuracy
Linear regression | 62%
Random forest | 65%
XGBoost | 66%
Gradient boost | 63%
Decision tree | 63%

For sugarcane productivity prediction, only the village-wise cultivated area and the sugarcane production for that particular season were used, and the target variable is productivity.
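The pipeline described above (deriving a per-unit-area productivity column, an 80/20 train/test split, and a comparison of regressors like those in Table 2) can be sketched with scikit-learn as follows. The column names and synthetic data are illustrative, not the factory's actual schema; XGBoost is omitted because it is a separate package, and the resulting scores do not reproduce the paper's accuracies.

```python
# Hedged sketch of the sugarcane modeling workflow, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "cultivated_area": rng.uniform(10, 150, n),  # per-village area (illustrative)
    "season": rng.integers(0, 3, n),             # three planting seasons, encoded
    "crop_age_months": rng.uniform(10, 14, n),
})
# Synthetic production, roughly proportional to area plus noise.
df["production"] = 90 * df["cultivated_area"] + rng.normal(0, 200, n)
# Derived target used for village-wise prediction: production per unit area.
df["productivity"] = df["production"] / df["cultivated_area"]

X = df[["cultivated_area", "season", "crop_age_months"]]
y = df["production"]
# 80% training / 20% testing, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(random_state=42),
    "gradient boost": GradientBoostingRegressor(random_state=42),
    "decision tree": DecisionTreeRegressor(random_state=42),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, r2 in scores.items():
    print(f"{name}: R^2 = {r2:.3f}")
```

On the real dataset one would report the held-out accuracy per model, as in Table 2, rather than R^2 on synthetic data.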
In this modeling, climate data, rainfall, and soil quality were not considered; the parameters used are the cultivated area, the variety planted, the planting date, the type of water supply, and the harvest time. The average sugarcane productivity of India is 70-80, the average productivity of Maharashtra is 80.72 [r20], whereas the average productivity of the study area is 95. The random forest regressor gives 65% accuracy, and the other two, XGBoost and gradient boosting, gave 66% accuracy. Compared to using a single model to predict a response, using many models can improve the robustness and accuracy of predictions.

4 Conclusion

The goal of this study was to see whether a machine learning approach could provide fresh insights about sugarcane productivity in western Maharashtra. Predicting crop production may help sugar mills boost industry revenues by implementing more effective and focused forward-selling tactics and logistics planning. The methodology described in this research can readily be applied to other sugarcane-growing regions and agricultural businesses around the world to improve agricultural methods. The sugarcane productivity prediction results demonstrate that the prediction accuracy of the machine learning algorithms is quite promising.

Acknowledgements

The authors are highly grateful to Shree Dutta Sugar Factory for providing the data necessary to carry out this research, and thankful to Thadomal Shahani Engineering College, Bandra, as well as Xavier Institute of Engineering, Mumbai, India.

References

[1] P. Mishra, M. A. G. A. Khatib, I. Sardar, J. Mohammed, K. Karakaya, A. Dash, M. Ray, L. Narsimhaiah, A. Dubey, "Modeling and forecasting of sugarcane production in India," Sugar Tech, 23 (6), 1317-1324, 2021.
[2] L. A. Monteiro and P. C.
Sentelhas, "Sugarcane yield gap: can it be determined at national level with a simple agrometeorological model?," Crop and Pasture Science, 68 (3), 272-284, 2017.
[3] I. Maharashtra Cell, "Agriculture contingency plan for district: Kolhapur," ICAR-CRIDA-NICRA, 2019.
[4] Y. Everingham, J. Sexton, D. Skocaj, G. Inman-Bamber, "Accurate prediction of sugarcane yield using a random forest algorithm," Agron. Sustain. Dev., 36, 27, Springer Verlag/EDP Sciences/INRA, 2016.
[5] R. G. Hammer, P. C. Sentelhas, J. C. Mariano, "Sugarcane yield prediction through data mining and crop simulation models," Sugar Tech, 22 (2), 216-225, 2020.
[6] R. S. Kodeeshwari and K. T. Ilakkiya, "Different types of data mining techniques used in agriculture: a survey," International Journal of Advanced Engineering Research and Science, 4 (6), 237191, 2017.
[7] Shree Datta Shetkari S.S.K. Ltd., Shirol. Available at: http://dattasugar.co.in/

International Journal of Applied Sciences and Smart Technologies
Volume 2, Issue 2, pages 209-216
p-ISSN 2655-8564, e-ISSN 2685-9432

Tourism Site Recommender System Using Item-Based Collaborative Filtering Approach

R. A. Nugroho1,*, A. M. Polina1, Y. D. Mahendra1
1Department of Informatics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: robertus.adi@usd.ac.id
(received 21-11-2020; revised 23-11-2020; accepted 25-11-2020)

Abstract

Many people like traveling; however, it is often difficult for them to find a tourism site that they really like. The problem is that there is too much information about tourism. To overcome this problem, we need to filter the information, and a recommender system can do this filtering.
Considering its advantages, the system uses an item-based collaborative filtering approach to give recommendations. Some tourism sites around the Daerah Istimewa Yogyakarta province were used in this research. The system is able to give recommendations to users; the accuracy of the rating prediction is 0.6293 and the average time consumption is 1693.33 milliseconds.

Keywords: recommender system, tourism, collaborative filtering

1 Introduction

Traveling has become a lifestyle for Indonesian people today. Various tourist destinations have opened and managed to attract tourists. One of the provinces that is always attractive as a tourist destination is Yogyakarta, and many tourists visit this area. According to tourism statistics, from 2014 to 2018 there was an increase in the number of tourists visiting Yogyakarta [1]. Various types of tourist destinations, including natural tourism, artificial tourism, and cultural tourism, can be found here. Sometimes these attractions go undiscovered by tourists. It is not that there is no information; rather, too much information about tourism sites is the cause. Too much information makes it difficult for tourists to find objects that interest them. To overcome this problem, we need to reduce the information given to the user, so that only relevant information is provided to the users (tourists). A recommender system is a system that is able to suggest items that users like [2]. This system can provide information according to the user's preferences. Various fields have implemented recommender systems as a solution for filtering the information that will be provided to users. In the field of tourism, a recommender system is needed, especially for reducing the amount of tourism site information presented to users [3].
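The item-based collaborative filtering named in the title and abstract compares items through the ratings of users who rated both of them. A minimal sketch using plain cosine similarity over co-rated users follows; the small rating dictionary, the site abbreviations, and the choice of cosine similarity are illustrative assumptions, not the paper's exact survey data or formula.

```python
# Sketch of item-to-item similarity for item-based collaborative filtering:
# similarity between two items is computed from co-rated cases only, i.e.
# users who rated both items. Ratings below are illustrative.
from math import sqrt

ratings = {  # user -> {tourism site: rating on the 1-5 scale}
    "user1": {"TS": 5.0, "KY": 5.0, "CP": 4.0},
    "user2": {"TS": 4.0, "KY": 3.0, "CP": 4.5},
    "user3": {"TS": 3.5, "KY": 1.0},
}

def item_cosine(item_a, item_b, ratings):
    """Cosine similarity between two items over co-rated users only."""
    pairs = [(u[item_a], u[item_b]) for u in ratings.values()
             if item_a in u and item_b in u]
    if not pairs:
        return 0.0  # no co-rated cases: similarity is undefined, treat as 0
    dot = sum(a * b for a, b in pairs)
    norm_a = sqrt(sum(a * a for a, _ in pairs))
    norm_b = sqrt(sum(b * b for _, b in pairs))
    return dot / (norm_a * norm_b)

print(round(item_cosine("TS", "KY", ratings), 4))
```

A predicted rating for an unvisited site is then typically a similarity-weighted average of the user's ratings for similar items.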
In recommender systems, two approaches are commonly used: collaborative filtering and content-based filtering [4]. Collaborative filtering tries to predict what users like by comparing user profiles with one another. In collaborative filtering, information about users' preferences is very important; if there is too little of it, the system faces a cold-start problem and cannot predict well. Meanwhile, content-based filtering predicts what users like now by looking at what they liked in the past. The system looks for similarities between the content of objects and the user profile; the more similar an object is, the more strongly it is recommended to users. Collaborative filtering comes in user-based and item-based variants [4]. Both require information about users' preferences. The difference is that the user-based approach looks at the relationship between users, while the item-based approach looks at the relationship between items [5]. Relationships between items are considered more static (they do not change much) than relationships between users, which results in a lower computational burden when providing recommendations. Therefore, in this research we propose a system for recommending tourism sites using the item-based collaborative filtering approach.

2 Research Methodology

This research used 10 tourism sites in Yogyakarta, Indonesia. The chosen sites were popular or widely visited tourism sites in Yogyakarta according to the Yogyakarta tourism statistics [1]. Table 1 shows the tourism sites used in this research. Table 1.
Tourism site list

No  Tourism site                       Abbreviation
1   Museum TNI AU Dirgantara Mandala   MTAU
2   Monumen Jogja Kembali              MJK
3   Tebing Breksi                      TB
4   Kraton Ratu Boko                   KRB
5   Museum Benteng Vredeburg           MBV
6   Taman Sari                         TS
7   Kraton Yogyakarta                  KY
8   De Mata Art Museum                 DMA
9   Taman Pintar                       TP
10  Candi Prambanan                    CP

The cold-start problem is a condition in which we do not have enough ratings for the items [6]. To avoid the cold-start problem, we first collected ratings for these sites from tourists through a survey. The survey involved five respondents. The ratings were given on a scale from 1 to 5 in half-point steps (1; 1.5; 2; 2.5; 3; 3.5; 4; 4.5; 5).

Table 2. User-item matrix (columns MTAU, MJK, TB, KRB, MBV, TS, KY, DMA, TP, CP; blank cells are sites the user never rated)
User 1: 5.00 5.00 4.00 4.00 5.00 5.00 5.00 4.00
User 2: 3.50 4.00 3.50 4.00 3.00 4.50
User 3: 1.00 2.50 2.50 2.50 3.50 1.00 2.00 3.50
User 4: 4.00 4.00 4.00 5.00 3.00 5.00 5.00 5.00
User 5: 3.50 4.00 4.00 4.00 3.00 3.00 4.50 3.50 4.00 5.00

A higher rating indicates that the tourist is more interested in the tourism site. After the ratings were obtained, a user-item matrix was formed. This matrix shows the ratings given by tourists (users) to certain tourism sites (items). In the user-item matrix, some cells are empty (see Table 2); this means the tourist did not give a rating for that tourism site, because they had never visited it. After the user-item matrix was formed, the process continued by computing the similarity between the item whose rating will be predicted and all other items in the matrix. Only co-rated cases (users who rated both item i and item j) were used in the calculation (see Figure 1). Figure 1.
Finding similarity between items.

To calculate the similarity, this research used the Pearson correlation:

s(i,j) = Σ_{u∈U} (r_{u,i} − r̄_i)(r_{u,j} − r̄_j) / ( √(Σ_{u∈U} (r_{u,i} − r̄_i)²) · √(Σ_{u∈U} (r_{u,j} − r̄_j)²) )   (1)

In equation (1), s(i,j) denotes the similarity between item i and item j; r_{u,i} and r_{u,j} are the ratings given by user u to items i and j; r̄_i and r̄_j are the average ratings of items i and j; and U is the set of users who rated both items. By considering the similarity values between items, the top-n neighbors were chosen. These neighbors were used to predict the rating that the active user would give to an item. The predicted rating is calculated by

P_{u,i} = Σ_{j∈N} s(i,j) · r_{u,j} / Σ_{j∈N} |s(i,j)|   (2)

In equation (2), P_{u,i} is the predicted rating of item i for user u; r_{u,j} is the rating given by user u to item j; s(i,j) is the similarity between items i and j; and N is the set of top neighbors. Based on this predicted rating, the system decides whether or not to recommend the item to the user: the higher the predicted rating, the greater the chance that the item will be recommended. In general, the recommendation process using the item-based collaborative filtering approach can be depicted as in Figure 2 [7]; this research followed that process.

Figure 2. Recommendation process.

At the end, the quality of the system is evaluated by measuring the accuracy of the predicted ratings and the time consumption of the prediction process.

3 Results and Discussions

The system is evaluated by measuring the error rate in predicting the rating given by a user for a tourism site. To measure the prediction error, we use the MAE (mean absolute error). The evaluation is carried out with several scenarios, each using a different number of nearest neighbors (top neighbors) when predicting the ratings: top 4 neighbors, top 6 neighbors, and top 8 neighbors.
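Equations (1) and (2) can be sketched in Python. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names and the toy rating matrix below are invented for the example.

```python
from math import sqrt

def pearson_item_similarity(matrix, i, j):
    """Eq. (1): Pearson correlation between items i and j over co-rated users.
    `matrix` maps each user to a dict {item: rating}; empty cells are absent."""
    users = [u for u in matrix if i in matrix[u] and j in matrix[u]]
    if len(users) < 2:
        return 0.0
    mi = sum(matrix[u][i] for u in users) / len(users)  # average rating of item i
    mj = sum(matrix[u][j] for u in users) / len(users)  # average rating of item j
    num = sum((matrix[u][i] - mi) * (matrix[u][j] - mj) for u in users)
    den = (sqrt(sum((matrix[u][i] - mi) ** 2 for u in users))
           * sqrt(sum((matrix[u][j] - mj) ** 2 for u in users)))
    return num / den if den else 0.0

def predict_rating(matrix, user, item, top_n=6):
    """Eq. (2): weighted average of the user's ratings on the top-n items
    most similar to `item`."""
    sims = sorted(((pearson_item_similarity(matrix, item, j), j)
                   for j in matrix[user] if j != item), reverse=True)[:top_n]
    den = sum(abs(s) for s, _ in sims)
    return sum(s * matrix[user][j] for s, j in sims) / den if den else 0.0

# Toy data (not the survey of Table 2): items "a" and "b" are rated alike.
ratings = {
    "u1": {"a": 1.0, "b": 1.0, "c": 5.0},
    "u2": {"a": 2.0, "b": 2.0, "c": 4.0},
    "u3": {"a": 3.0, "b": 3.0, "c": 3.0},
    "u4": {"a": 4.0},
}
```

Because items a and b are perfectly correlated in this toy matrix, the predicted rating of b for u4 equals u4's rating of a.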
We chose those numbers of nearest neighbors because only 10 tourism sites were involved.

Table 3. Evaluation results

Top neighbors   MAE      Time consumption (milliseconds)
4               0.6334   1654
6               0.6254   1797
8               0.6291   1629

From the results of the evaluation (see Table 3), it can be seen that the smallest error rate occurs when using the top 6 neighbors, namely 0.6254 (see Figure 3). However, the differences in error rates across the three scenarios are not significant.

Figure 3. Mean absolute error versus the number of top-n neighbors.

In addition, we also measured the time the system needs to complete the recommendation process. From the test results, the top 6 neighbors require the highest time consumption (see Figure 4), namely 1797 milliseconds. The differences in time consumption between scenarios are not significant.

Figure 4. Time consumption (milliseconds) versus the number of top-n neighbors.

4 Conclusions

From the experimental results, it can be concluded that the tourism site recommendation system is able to provide recommendations to users quite well. The item-based collaborative filtering approach is able to predict the ratings given by users with an average MAE of 0.6293 and an average time consumption of 1693.33 milliseconds. The weakness of this research is the small number of users and tourism sites involved.
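The averages quoted above follow directly from Table 3; a short sketch (our own helper, not the paper's code) shows the MAE definition and reproduces those averages.

```python
def mean_absolute_error(pairs):
    """MAE over (actual, predicted) rating pairs, the metric of Table 3."""
    return sum(abs(actual - predicted) for actual, predicted in pairs) / len(pairs)

# Averaging the three scenarios of Table 3 reproduces the figures
# quoted in the conclusions:
avg_mae = (0.6334 + 0.6254 + 0.6291) / 3   # -> 0.6293
avg_time = (1654 + 1797 + 1629) / 3        # -> 1693.33 ms
```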
In the future, it will be necessary to involve more users and tourism sites so that the scalability of the system, especially its computational load, can be measured properly. To improve accuracy, other similarity functions should also be investigated.

References

[1] Statistik Kepariwisataan. Dinas Pariwisata Daerah Istimewa Yogyakarta, 2018.
[2] F. Ricci, L. Rokach, and B. Shapira, “Introduction to recommender systems handbook.” In Recommender Systems Handbook, F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor (Eds.), Boston, Springer, 1–35, 2011.
[3] D. Gavalas, C. Konstantopoulos, K. Mastakas, and G. Pantziou, “Mobile recommender systems in tourism.” Journal of Network and Computer Applications, 39, 319–333, 2014.
[4] C. Desrosiers and G. Karypis, “A comprehensive survey of neighborhood-based recommendation methods.” In Recommender Systems Handbook, F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor (Eds.), Boston, Springer, 107–144, 2011.
[5] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, “Item-based collaborative filtering recommendation algorithms.” In Proceedings of the 10th International Conference on World Wide Web, New York, 285–295, 2001.
[6] S. Jain, A. Grover, P. S. Thakur, and S. K. Choudhary, “Trends, problems and solutions of recommender system.” In International Conference on Computing, Communication and Automation, 955–958, 2015.
[7] K. Falk, Practical Recommender Systems, Manning Publications, 2019.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 2, pages 153–160, p-ISSN 2655-8564, e-ISSN 2685-9432

Text Classification on Tamil

Omprakash Yadav 1, Alcina Judy 1, Praveen D’Souza 1, Calvin Galbaw 1,*, Hinal Rane 1
1 Department of Computer, Xavier Institute of Engineering, Mahim, Mumbai 400016, India
* Corresponding author: calving2012@gmail.com
(Received 01-09-2020; revised 29-12-2021; accepted 29-12-2021)

Abstract

By and large, we do not know how to speak and read the regional languages spoken in our country. We chose the Tamil language because it is one of our regional languages and many people do not understand it. In our project, text in the Tamil language is loaded from Wikipedia. It is then filtered, special characters are removed, and the text is organized under fields such as id, title, and url. It is then used to train a model with a CNN algorithm, and the dataset is created. In this way, any random Wikipedia page can now be tested, and its text is classified by these fields and predicted.

Keywords: Tamil text classification, feature classification, vocabulary set or bag-of-words, text mining, natural language processing

1 Introduction

For the most part, we do not understand many of the regional languages of our country, so whenever the language of a different state is spoken or written, we cannot understand it. In this project, we classify text based on fields such as name, country, and id. Here, we use a CNN to classify the text and train it on the dataset.
It is useful for people to classify the type of text and at least get an idea of what the text is about. It makes it easy for a person to identify the various components present in the data, and knowing the type of data is useful for various analytical purposes.

2 Literature Survey

We refer to references [1], [2], [3], [4], [5]. In deep learning, a convolutional neural network (CNN or ConvNet) is a class of deep neural networks most commonly applied to analyzing visual features. CNNs use the spatial weights of filters to extract features from an image. They have applications in image and video recognition, recommender systems, image classification, medical image analysis, natural language processing, and financial time series. Convolutional neural networks use convolutional layers as building blocks to learn from the dataset, along with pooling layers and fully connected layers. A convolution is the simple application of a filter to an input that results in an activation. The filters slide over the width and height of the input to convolve it, and an activation function is applied to produce a feature map. This map can be passed to another convolutional layer to create an increasingly detailed map, and the feature maps can be unrolled and fed into a fully connected layer for the specific predictive modeling problem, for example image classification. Since inputs like images, videos, and other multi-dimensional data have a quadratic number of features, an ordinary neural network needs to process a huge number of linear functions and activations, which takes a quadratic amount of time. A convolutional network, however, computes each weight in linear time using filters.
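The sliding-filter idea described above is easiest to see in one dimension. A minimal sketch of our own (not from the paper): a filter of weights is slid along a sequence and an activation (ReLU here) is applied at each position, yielding a feature map.

```python
def feature_map_1d(seq, kernel):
    """Slide a 1-D filter over `seq`, applying ReLU at each position.
    Output length is len(seq) - len(kernel) + 1 (no padding)."""
    k = len(kernel)
    relu = lambda x: max(0.0, x)
    return [relu(sum(w * x for w, x in zip(kernel, seq[i:i + k])))
            for i in range(len(seq) - k + 1)]

# A [-1, 1] filter responds only where the signal steps up (a rising edge):
fmap = feature_map_1d([0, 0, 1, 1, 0], [-1.0, 1.0])  # -> [0.0, 1.0, 0.0, 0.0]
```

The same sliding computation in two dimensions, with learned rather than handcrafted weights, is what a convolutional layer does.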
The outcome is highly specific features that can be detected anywhere in the input images [3]. In summary [3]:

1. Convolutional neural networks apply a filter to an input to create a feature map that summarizes the presence of detected features in the input.
2. Filters can be handcrafted, such as line detectors, but the innovation of convolutional neural networks is to learn the filters during training in the context of a specific prediction problem.
3. The feature map can be computed for both one- and two-dimensional convolutional layers.

For natural language processing tasks, artificial neural networks such as recurrent neural networks (RNN) and long short-term memory networks (LSTM) are often preferred, because they feed past activations or outputs into the following hidden states. This helps in remembering the word or character already computed, which in turn helps in predicting the next dependent word; that is why these models are used most often in language models. Since the task attempted here is classification, convolutional neural networks are used. Instead of inputting an image, a word embedding can be used as the input. Word embeddings are created using various models, for example word2vec. Since the prediction will be made on Wikipedia data, we have created an embedding from Wikipedia pages.

Input layer: this is the layer where we feed input to our model. The number of neurons in this layer is equal to the total size of the word embedding.

Hidden layer: for classification using word embeddings, usually a single fully connected layer is used to form the hidden layer. Alternatively, a single convolutional layer followed by a fully connected layer can be used.
Output layer: since there are n output classes, the output from the hidden layer is fed into a logistic function, the softmax layer, which converts the output for each class into a probability score. The input is fed through the model and the output of each layer is obtained; this step is called the feedforward pass. We then calculate the error using an error function; common error functions are cross-entropy, squared loss, and so on. After that, we backpropagate through the model by computing the derivatives; this step, called backpropagation, is used to minimize the loss.

3 Existing System

Natural language processing (NLP) represents computational techniques used for processing human language, whether represented as text or speech. NLP in the context of deep learning has become very popular because of its ability to handle text that is far from grammatically correct. The ability to learn from data has made machine learning systems powerful enough to process any type of unstructured text. Machine learning approaches have been used to achieve state-of-the-art results on NLP tasks such as text classification, machine translation, question answering, text summarization, text ranking, relation classification, and others. The focus of our work is text classification of the Tamil language. Text classification is the most widely used NLP task. It finds application in sentiment analysis, spam detection, email classification, and document classification, to name a few, and it is an integral component of conversational systems for intent detection. There have been very few text classification works in the literature focusing on the resource-constrained Tamil language.
While the most important reason for this is the unavailability of large training data, another reason is the generalizability of deep learning architectures to different languages. However, Tamil is a morphologically rich and relatively free word order language, so we investigate the performance of different models on the Tamil text classification task. Moreover, there has been a substantial rise in Tamil-language digital content in recent years. Service providers and e-commerce industries are now targeting local languages to improve their visibility. An increase in the robustness of translation and transliteration systems has also contributed to the rise of NLP systems for Tamil text. This work will help in the selection of the right models and provide a suitable benchmark for further research on Tamil text classification tasks.

4 Proposed Methodology

The proposed methodology is as follows.

Step 1: obtain the text from Wikipedia. For Tamil pages, go to https://ta.wikipedia.org/wiki/ _ , extract the text, and convert it into a CSV file. This file is then taken for further processing.

Step 2: filtering and removal of special characters. The special characters and ambiguity present in the text, such as commas, semicolons, asterisks, and brackets, are removed. This simplifies the text for further processing.

Step 3: classify using titles. The text is classified according to fields such as id, name, title, url, recursive words, etc.

Step 4: train the dataset using a CNN. The dataset is trained using a convolutional neural network (CNN), a kind of feed-forward neural network. CNN is an efficient recognition algorithm widely used in pattern recognition and image processing, with features such as a simple structure, few training parameters, and adaptability.
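Steps 1 and 2 can be sketched as follows. This is our own illustration: the set of stripped characters and the field names are assumptions drawn from the step descriptions, not the authors' exact code, and the Tamil words in the example are arbitrary.

```python
import csv
import re

SPECIAL = re.compile(r"[,;*()\[\]{}]")  # step 2: characters assumed removed

def clean_text(text):
    """Replace the listed special characters with spaces and collapse runs
    of whitespace; Tamil letters pass through untouched."""
    return re.sub(r"\s+", " ", SPECIAL.sub(" ", text)).strip()

def save_dataset(rows, path):
    """Step 1: store (id, title, url, text) records as a CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "title", "url", "text"])
        for row in rows:
            writer.writerow(row)
```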
Step 5: test using a random Wikipedia page. We can now test any random Wikipedia page; its text is classified according to the fields and the results are predicted.

5 Implementation

Figure 1. Flowchart.

Figure 1 above is the flowchart of our system, which works as follows:

1. The text from Tamil Wikipedia pages is extracted and checked for special characters.
2. Such characters create problems during classification; these special characters are not important for classification.
3. The text is classified according to the title, tags, keywords, and what the text is about.
4. This is used as the dataset on which the model is trained.
5. Once the model achieves high accuracy, the user can use it to get the details of an unknown Tamil text, such as its title.

6 Conclusion

In this report, we have introduced a Tamil-language text classification system that helps the user identify the type of a text. We create a dataset by removing all the ambiguities in the content and train on this dataset, which can then be used to test any arbitrary Wikipedia page.

References

[1] E. Annamalai and S. B. Steever, Modern Tamil, in The Dravidian Languages, New York, Routledge, 1999.
[2] R. K. Belew, “Adaptive information retrieval.” In Proceedings of the 12th Annual International ACM/SIGIR Conference on Research and Development in Information Retrieval, NY, 11–20, 1989.
[3] L. Chanunya and R. Peachavanish, “Automatic Thai language essay scoring using neural network and latent semantic analysis.” In Proceedings of the First Asia International Conference on Modeling and Simulation, 2007.
[4] C. H. Li and S. C.
Park, “Text categorization based on artificial neural networks.” In ICONIP, LNCS 4234, 302–311, 2006.
[5] C. H. Li and S. C. Park, “Neural network for text classification based on singular value decomposition.” In Seventh International Conference on Computer and Information Technology, 47–52, 2007.

International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 1, pages 59–66, p-ISSN 2655-8564, e-ISSN 2685-9432

Comparison Simulation Analysis of the Gradual Summation of a Function with Recognition of Direct Extrapolation via IN Series

Stephanus Ivan Goenawan
Department of Industrial Engineering, Atma Jaya Catholic University, Cisauk BSD Highway, Tangerang, Indonesia
Corresponding author: steph.goenawan@atmajaya.ac.id
(Received 26-06-2019; revised 05-08-2019; accepted 05-08-2019)

Abstract

A set of data whose pattern has the characteristics of a function can be approximated using a Newton series simulation. When the Newton series is used, the summation of extrapolated data must be carried out gradually, step by step. With the IN series, however, the summation of extrapolated data does not need to be carried out step by step. The methodology in this research compares the data pattern from the simulation of a high-degree function with the interpolation and extrapolation results of the IN series simulation. Furthermore, a comparison simulation analysis is carried out between summing the function values gradually and extrapolating the summation directly using the IN series.
The result of the comparison simulation analysis shows that the total summation of the data is the same whether it is computed gradually or directly, so the direct-sum technique using the IN series makes the total summation process more efficient.

Keywords: IN series, Newton series, extrapolation, interpolation

1 Introduction

The IN series, namely the Ivan Newton series (according to HKI EC00201856522), is a further development of the Newton series [1]. A function built from a Newton series has a function basis of the form ∏_{i=0}^{n−1}(t − i), where n is a positive integer [2]. If the values of a function built with the Newton series are summed gradually, there is an inefficiency, because repeated summation processes must be carried out sequentially, which takes longer than a process without repeated summation. So that the sum of extrapolation results can be obtained in a more efficient way, a new series has been developed, namely the Ivan Newton series, abbreviated as the IN series [3]. The IN series is generated from the function basis of the multilevel series of IN degree one, the u-fold repeated sum over an index j running from 1 to t, where t and u are positive integers. If the function data are not summed, the function basis takes the same form as the basis of the Newton series, (1/n!) ∏_{i=0}^{n−1}(t − i). Because this basis has similarities and capabilities close to those of the Newton series, being able to interpolate and extrapolate data, the series is named the Ivan Newton series.

2 Basic Theory

Before discussing the IN series, we explain the multilevel series of IN degree one, which plays an important role as the function basis of the IN series [4].
The multilevel series of IN degree one is the series of IN degree one summed repeatedly, each level starting again from a lower limit of one [5]. To be clear, the notation of the multilevel summation, with t and u positive integers, is defined as follows: the u-level sum applies the summation of i from 1 to t u times in succession, so that, for example, the two-level sum is

∑_{i=1}^{t} ∑_{j=1}^{i} j = 1 + (1+2) + (1+2+3) + … + (1+2+3+…+t).   (1)

Furthermore, in general it can be shown [4] that the u-level sum of i from 1 to t has the closed form

∑^{(u)}_{i=1…t} i = C(t+u, u+1),   (2)

where C(n, k) denotes the binomial coefficient. The form of the IN series [3], which uses as its function basis the multilevel series of IN degree one of equation (4), is

IF_u(t) = ∑_l b_l g(l, u, t) = b_0 g(0, u, t) + b_1 g(1, u, t) + b_2 g(2, u, t) + b_3 g(3, u, t) + …,   (3)

where g(i, u, t) is the u-level sum of the Newton basis of degree i, with closed form, for i ≥ 1,

g(i, u, t) = C(t − i + u, u + i),   (4)

and the smallest function basis, for i = 0, is

g(0, u, t) = C(t + u − 1, u).   (5)

In the IN series of equation (3), the variable t is a positive integer; if the range of t is extended to the real numbers, the formula coincides with the Newton series. The form of the IN series whose results are not summed, with unit difference between the discrete data, is, from equation (3),

f(t) = b_0 + b_1 t + b_2 t(t−1)/2! + b_3 t(t−1)(t−2)/3! + … + b_k (1/k!) ∏_{i=0}^{k−1}(t − i),   (6)

or

f(t) = b_0 + ∑_{j≥1} b_j (1/j!) ∏_{k=0}^{j−1}(t − k),   (7)

with b_j the constants of the series.
By using the interpolation method of the IN series, the constants that compose the IN series of equation (3) can be generated [6], namely

b_0 = f(0),   b_1 = f(1) − b_0,   (8)

and, for higher orders,

b_l = f(l) − b_0 − b_1 l − b_2 l(l−1)/2! − b_3 l(l−1)(l−2)/3! − … − b_{l−1} (1/(l−1)!) ∏_{k=0}^{l−2}(l − k),   (9)

or

b_l = f(l) − b_0 − ∑_{j=1}^{l−1} b_j (1/j!) ∏_{k=0}^{j−1}(l − k),   (10)

where l ranges over the positive integers.

3 Results and Discussions

From equations (9) and (10), an algorithm for interpolation using the IN series can be made. As an example, the initial polynomial data used for the interpolation, from which the constants of the IN series in Table 1 are found, are derived from the polynomial function of equation (11). The data in Table 1 are derived from the polynomial function that will also be used for the direct-sum extrapolation test in Table 3, namely

f(x) = 1 + 2x² + 4x⁴ + 7x⁶ + 10x⁸ + 11.4x¹⁰.   (11)
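Equations (2) and (7)–(10) can be checked numerically. The sketch below is our own (with unit spacing and hypothetical function names): it computes the constants b_l by equation (10), rebuilds and extrapolates f(t) with equation (7), and verifies the closed form of equation (2) by brute-force repeated summation.

```python
from math import comb

def basis(j, t):
    """(1/j!) * t(t-1)...(t-j+1), the Newton basis of eq. (7)."""
    prod, fact = 1.0, 1
    for k in range(j):
        prod *= (t - k)
        fact *= (k + 1)
    return prod / fact

def in_coefficients(samples):
    """Eqs. (8)-(10): b_l from samples f(0), f(1), ..., f(n-1)."""
    b = []
    for l, fl in enumerate(samples):
        b.append(fl - sum(bj * basis(j, l) for j, bj in enumerate(b)))
    return b

def newton_eval(b, t):
    """Eq. (7): f(t) = b0 + sum_j b_j (1/j!) prod_{k<j} (t - k)."""
    return sum(bj * basis(j, t) for j, bj in enumerate(b))

def multilevel_sum(t, u):
    """The u-level sum of i from 1 to t, by direct repeated summation."""
    seq = list(range(1, t + 1))
    for _ in range(u):                 # each level replaces terms by partial sums
        total, partials = 0, []
        for x in seq:
            total += x
            partials.append(total)
        seq = partials
    return seq[-1]

# The coefficients reproduce and extrapolate a sampled polynomial,
# and the closed form of eq. (2) matches the brute-force sums.
b = in_coefficients([t ** 2 for t in range(4)])   # samples of f(t) = t^2
assert abs(newton_eval(b, 5) - 25) < 1e-9         # extrapolation to t = 5
assert all(multilevel_sum(t, u) == comb(t + u, u + 1)
           for t in range(1, 8) for u in range(1, 5))
```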
then the data extrapolation test and summation are carried out, summation in level one directly from the interpolated polynomial function using the excel. the results are shown in table 3. table 3. comparison test of extrapolation data and direct summation using the in series t data from the actual polynomial function gradual data summation of function ekstrapolation data using in series summation of data directly using in series international journal of applied sciences and smart technologies volume 2, issue 1, pages 59–66 p-issn 2655-8564, e-issn 2685-9432 65 after obtaining the extrapolation data, it is necessary to compare the data result with the actual polynomial function, it turns out that from table 3 the same data results are obtained. furthermore, it is also necessary to compare the sum results of polynomial function data one by one with the sum results directly using the in series, it turns out that from table 3 the same results are also obtained. 4 conclusions a. ivan newton series or in series is a series formula that is more general than the newton series because it is able to direct summation of extrapolation data on polynomial functions. b. the basis of the function that compose the in series is the multilevel series of in one degree. c. interpolation simulation from in series can be used to extrapolate data well if the data characteristic is polynomial function. d. in the future the in series can be utilized in numerical methods. acknowledgements this work was supported by the lppm atma jaya catholic university. the author thanks the reviewers for their suggestions, which have improved the quality of this paper. references [1] i. newton, “philosophiae naturalis principia mathematica book iii,” london, 1687. [2] r. w. hamming, “numerical methods for scientists and engineering: 2nd,” dover publication, inc. new york, 1973. [3] s. i. 
Goenawan, “Deret IN (Ivan Newton): formula efisien untuk jumlahan bertingkat data interpolasi atau ekstrapolasi” [The IN (Ivan Newton) series: an efficient formula for multilevel summation of interpolated or extrapolated data], HKI EC00201856522, 2018.
[4] S. I. Goenawan, “Deret bertingkat berderajat satu dalam teori keteraturan” [Multilevel series of degree one in the theory of regularity], Metris, 4(1), 50–56, 2003.
[5] S. I. Goenawan, Teori Keteraturan [Theory of Regularity], Atma Nan Jaya, Jakarta, 1998.
[6] S. I. Goenawan, “Deret garis bertingkat dalam teori keteraturan” [Multilevel line series in the theory of regularity], Metris, 3(3), 50–57, 2002.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 1, pages 47–58, p-ISSN 2655-8564, e-ISSN 2685-9432

Weft Computation of Endek Weaving

Nyoman Dewi Pebryani 1,*, Putu Manik Prihatini 1, Tjok Istri Ratna C.S. 1
1 The Indonesian Institute of the Arts Denpasar, Bali, Indonesia
* Corresponding author: dewipebryani@isi-dps.ac.id
(Received 18-05-2022; revised 27-05-2022; accepted 29-05-2022)

Abstract

Endek is a textile produced in Bali with a single-ikat technique, specifically weft ikat. Weft ikat means that the pattern is created, or drawn, on the weft threads before the ikat, or tying, process. The weft threads are transferred onto a frame; a frame consists of tens to hundreds of bundles, traditionally called bulih. Drawing a pattern on a frame requires special expertise, as the pattern maker has to translate a two-dimensional pattern into a shape that is distorted along the wide side. Indirectly, this requirement confines pattern making to those who can visualize the distorted shape well enough to draw it on the frame. To make design exploration easier, it is valuable to provide various templates together with a multiplier that automatically distorts each template. Therefore, understanding the manual process on site is important before simulating the formula of the weft computation, including the templates and the multiplier.
With this computation, pattern makers, and anyone enthusiastic about designing endek patterns, can take part in the design process.

Keywords: bulih, computation, endek, pattern, weft

This work is licensed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0/

1 Introduction

Traditional textiles are created through a process of weaving, in which the warp and weft threads intersect in a loom. Endek, one of the Balinese traditional textiles, is produced with the weft ikat technique. According to Schaublin et al., “ikat (Indonesian ‘bundle,’ mengikat ‘to tie’) is a complicated and time-consuming resist dye technique in which undyed yarns are mounted on a frame in bundles” [1]. The pattern that appears in endek is created on the weft threads. To create a pattern on the weft threads, the pattern maker needs proficiency in visualizing two-dimensional shapes as shapes distorted along the wide side. In addition, the pattern maker must decide the number of round threads, or bundles, to set in the frame (traditionally called penamplikan). The number of bundles (traditionally called bulih) varies from one design to another, mostly between fifty and hundreds of bulih in one frame. The expertise of creating the pattern on the weft threads is passed down verbally from generation to generation and is not well documented. To preserve this cultural creation, transforming it into digital formats is essential. Digital heritage is defined, according to the Guidelines for the Preservation of Digital Heritage, as “texts, databases, still and moving images, audio, graphics, software, and web pages, among a wide and growing range of formats” [2]. There are challenges in keeping this digital heritage usable and available, especially for the community that owns the tradition.
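The "template plus multiplier" idea described above can be sketched as follows. This is purely illustrative and is not the artisans' formula: we assume a pattern given as a grid of cells and a multiplier that stretches each row along the wide side, so that every pattern cell is carried by a fixed number of bulih (bundles) on the frame.

```python
def distort_row(row, multiplier):
    """Stretch one pattern row along the wide side: each cell is repeated
    `multiplier` times, one entry per bulih (bundle) on the frame."""
    return [cell for cell in row for _ in range(multiplier)]

def pattern_to_frame(pattern, bulih_per_frame):
    """Map a grid pattern (equal-length rows) onto a frame of a given width."""
    width = len(pattern[0])
    if bulih_per_frame % width:
        raise ValueError("frame width must be a whole multiple of the pattern width")
    return [distort_row(row, bulih_per_frame // width) for row in pattern]

# A 2-cell-wide motif stretched onto a hypothetical 6-bulih frame:
frame = pattern_to_frame([[1, 0], [0, 1]], 6)
```

Here each original cell spans three bulih, which is the kind of horizontal distortion the pattern maker otherwise has to visualize by hand.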
today, globalization poses significant challenges to the survival of many traditions, one of which is traditional craftsmanship design. young people in communities may find the required, sometimes lengthy apprenticeships too demanding, though they are necessary to learn the many traditional forms of the craft. this knowledge may disappear if family or community members are not interested in learning it. the process of transforming an oral tradition into a digital form involves careful decoding to avoid misinterpretation. according to pebryani, “the resulting cultural creation produced is based on the cultural knowledge owned by the local people in a specific area” [3]. this cultural knowledge entails indigenous algorithms consisting of the grammar and computation used by the artisans to create their textile. hence, investigating the pattern design and the computation of the weft threads in the process of designing endek will provide complete documentation of the endek weft ikat design process.

2 research methodology

the research methodology used in this study is to explore and explain the cultural knowledge and indigenous algorithm in the process of patterning endek textiles. the methodology combines descriptive and simulation methods, so the study is divided into several stages. according to sommer, “the steps involved in conducting a simulation include formulating the model, simulating the event, and analyzing the results” [4]. therefore, as shown in figure 1, this study is divided into three stages: (1) investigating endek textile patterns on-site, (2) simulating the weft computation, and (3) assessing the weft computation on the actual threads. figure 1.
stages in understanding the weft threads computation

the data was collected through interviews and participant observations with five pattern makers and weavers from various weaving centers. the participant observation allowed the researcher to see and infer information that people may not have mentioned during an interview. the researcher made appointments with participants, including the pattern makers and weavers. both the interviews and observations took place while participants worked and lasted 60 minutes each. the researcher observed the activities of the participants, including pattern making, the steps involved in creating the patterns, and other phases of the process. the data obtained from the on-site investigation serves as a guideline for conducting the second stage, to avoid misinterpretation in decoding the pattern design of endek textiles. the information from the first and second stages is likewise significant for the assessment stage.

3 results and discussion

the data from the site was analyzed and discussed in three sub-topics: investigation, simulation, and assessment. investigation discusses the process of creating endek on-site, including the process of counting threads and patterns. simulation examines the formula relating bundles on the frame to the pattern on the endek textile. assessment tests the formulas in the actual endek design process.

investigation. creating endek textiles follows several steps. warp and weft threads are treated with different procedures. the procedure for warp threads consists of coloring or dyeing the warp threads and putting the warp threads into a non-machine weaving loom. according to pebryani, “the treatment and procedure of the warp and weft yarns in the endek weaving consist of approximately 14 stages” [5].
the procedure for weft threads, as shown in figure 2, consists of splitting the weft threads, transferring the weft threads onto a frame, drawing patterns on the weft threads, tying the weft threads, dyeing the weft threads, and coloring the second and third colors on the weft threads. after both threads are processed, both are ready to be woven on a non-machine weaving loom, as shown in figure 2.e.

figure 2. (a) splitting weft threads; (b) transferring weft threads into a frame; (c) drawing pattern on a weft threads frame; (d) tying patterned weft threads frame; (e) coloring weft thread frame; (f) weaving on a non-machine weaving loom

the weft threads are stored on a frame prior to drawing the pattern on the threads. a frame consists of several bulih, and a bulih contains several round threads. one round thread consists of 28 to 30 strands of yarn. the number of bulih varies depending on the desired pattern, from approximately fifty to hundreds of bulih in one frame. an experienced pattern maker has already computed the number of bulih during the process of transferring the weft, based on the desired pattern. hence, the pattern design needs to be decided before transferring the weft threads to the frame. figure 3 shows the size of the frame, 108 x 100 centimeters. of the 108 cm width, 7 cm and 1 cm are reduced on both the left and right of the frame, leaving 92 cm for the drawing area. the distance between bulih is 1 cm.

figure 3. a frame of weft threads

as shown in figure 3, the pattern maker draws in an area of 92 cm x 100 cm. after the frame is prepared, the pattern maker draws helper lines vertically every 2 cm or so.
these lines will assist the pattern maker in dividing the space or area on the weft threads. the drawing process uses a marker in navy or red. the downside of this process is that lines which have been drawn are difficult to undo or erase. hence, not many junior pattern makers are able to explore new designs on the weft threads; they mostly follow images or patterns they have already drawn before. designing a pattern for endek in a frame is different from designing a pattern in a template. when designing a pattern in a frame, the pattern maker has to visualize a two-dimensional shape as a shape distorted on the wide side, which limits the pattern maker during design exploration. to make things easier for beginner and experienced pattern makers alike, transferring this knowledge into a digital format is essential.

simulation. paulus gerdes, using the study of mathematics in culture, found that the craft art created by indigenous people involves calculations (or hidden logic) that indigenous people pass on from generation to generation [6]. thus, the way to understand indigenous algorithms of weft computation is by studying and taking part in the local activity of designing endek textiles. knowledge gathered across interviews and participant observations with several pattern makers and weavers is simulated to acquire a formula for the template as well as the bulih computation. one frame consists of tens to hundreds of bulih; one bulih consists of several bundles (traditionally called “as”), where one bundle consists of two thread pullings. one thread pulling consists of 28 strands of yarn, and 28 strands of yarn create 1 cm of textile length. therefore, 28 becomes the divider used to obtain the multiplier, as shown in table 1. the formula for the multiplier is:

1. multiplier = 28 / (bundles x 2)

table 1.
multiplier computation

bundles (as) | multiplier
2 | 7
3 | 4.6
4 | 3.5
5 | 2.8

the template for designing the pattern has a fixed height with distinctive widths. as shown in figure 4.a, a design is created on a template with a size of 92 x 28 cm. then, in figure 4.b, the pattern design in figure 4.a is transformed onto a weft threads frame. to identify the size of the designated weft threads frame, the width of the template is multiplied by the multiplier for the preferred number of bundles. as shown in figure 4.b, the width is altered from 28 to 98, which comes from 28 times 3.5 (the multiplier for four bundles).

figure 4. (a) left: endek’s pattern design in a template; (b) right: endek’s pattern design in a frame

2. bulih = multiplier x the width of the template

subsequently, knowing the number of bulih and the bundles, the length of the endek textile can be computed (as shown in table 2) with the following formula:

3. the textile length = (30 / 28) x bulih x (bundles x 2) x 2

table 2. the width, bulih, and the result

the width of the template | bulih for as 4 | bulih for as 3 | bulih for as 2 | result
28 cm | 98 | 128 | 196 | 1680 cm
26 cm | 91 | 120 | 182 | 1560 cm
24 cm | 84 | 110 | 168 | 1440 cm
22 cm | 77 | 101 | 154 | 1320 cm
20 cm | 70 | 92 | 140 | 1200 cm
18 cm | 63 | 83 | 126 | 1080 cm
16 cm | 56 | 74 | 112 | 960 cm
14 cm | 49 | 64 | 98 | 840 cm
12 cm | 42 | 55 | 84 | 720 cm
10 cm | 35 | 46 | 70 | 600 cm
… | … | … | … | …

assessment. theorems need to be presented through illustrations or graphs in order to see how the theorems or formulas have been proven [7]. the frame shown in figure 4.b is printed at a scale of 1:1, and holes are created in it as a mold to allow ink to permeate onto the weft threads frame, as shown in figure 5.a.
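as an illustration, the three formulas above can be combined into a short python sketch (the function names are ours, for illustration only; rounding the bulih count to a whole number is our assumption, since a frame holds whole bulih):

```python
def multiplier(bundles):
    """Formula 1: one bundle ("as") is two thread pullings of 28 strands,
    and 28 strands yield 1 cm of textile, so 28 is the divider."""
    return 28 / (bundles * 2)

def bulih(template_width_cm, bundles):
    """Formula 2: number of bulih on the frame for a given template width."""
    return round(multiplier(bundles) * template_width_cm)

def textile_length_cm(template_width_cm, bundles):
    """Formula 3: resulting textile length from the bulih and bundle counts."""
    b = bulih(template_width_cm, bundles)
    return (30 / 28) * b * (bundles * 2) * 2
```

for example, `multiplier(4)` gives 3.5 and `bulih(28, 4)` gives 98, matching the first row of table 2; note that the length formula collapses to roughly 60 times the template width regardless of the bundle count, since the multiplier cancels the bundles term.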
the inked marks on the weft threads frame are then tied, as shown in figure 5.b.

figure 5. (a) left: the weft threads frame that has been patterned; (b) right: the weft threads frame whose patterns have been tied

after the weft threads have been tied according to the patterns, the next process is coloring with a basic dye, followed by the desired colors. later, the threads are split onto a shuttle. this shuttle of weft threads passes the warp threads horizontally, from right to left and back, until an endek textile materializes, as shown in figure 6.

figure 6. endek textile pattern as the result of the design on the left side

4 conclusion

the computation of weft threads makes designing patterns for endek textiles easier by providing various template dimensions. in addition, with the multiplier, the template can automatically be stretched on the wide side to transform it into bulih. the endek design pattern created through this computation has been assessed up to the creation of an actual textile. the computation of weft threads contributes to transforming the manual process of designing endek patterns into a digital process, as well as encouraging the pattern maker to be more explorative.

acknowledgements

this research is supported by the ministry of education, culture, research, and technology and the education fund management institute (lpdp) through the 2021 applied science research program funding. we would also like to express our gratitude to the astiti weaving center, which was willing to partner with us in this research process.

references

[1] schaublin, kartaschoff, & ramseyer. balinese textile, british museum press, london, 1991.

[2] national library of australia.
guidelines for the preservation of digital heritage. 2003. retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000130071

[3] author. culturally specific shape grammar: formalism of geringsing textiles patterns through ethnography and simulation, all dissertations, 2387. https://tigerprints.clemson.edu/all_dissertations/2387

[4] sommer b & sommer r. a practical guide to behavioral research: tools and techniques. oxford university press. 1991.

[5] author. digital transformation in endek weaving tradition. mudra jurnal seni budaya, 37(1), 78-85.

[6] gerdes, p. reflections on ethnomathematics. for the learning of mathematics, 14(2), 19-22, 1994.

[7] herawati, m.v.a., henryanti, p.s., aditya, r. identity graph of finite cyclic groups. international journal of applied sciences and smart technologies, 3(1), 101-110.

international journal of applied sciences and smart technologies, volume 4, issue 2, pages 141–148, p-issn 2655-8564, e-issn 2685-9432

dijkstra algorithm implementation in determining the shortest route of industrial gas distribution in pt tira austenite tbk cikarang with python programming language

bernadetha nathalia fitra soverenty1, cyrenia novella krisnamurti1,* 1 department of mathematics education, sanata dharma university, yogyakarta, indonesia *corresponding author: cyrenianovella@usd.ac.id (received 23-06-2022; revised 27-09-2022; accepted 28-09-2022)

abstract

mathematics is used in various fields of human life, one of them being the industrial field. in the industrial field, mathematics can be used to find the shortest route for distributing industrial gas. finding the shortest route was done using dijkstra's algorithm and by developing a program in the python programming language.
the purpose of this study is to determine the shortest route for industrial gas distribution at pt tira austenite tbk cikarang using the python programming language. this type of research is applied research. the result obtained from this study is a python program that can be used to find the shortest route of industrial gas distribution, where users only enter the starting point and endpoint data in the program.

keywords: shortest route, industrial gas distribution, dijkstra algorithm, python

this work is licensed under a creative commons attribution 4.0 international license http://creativecommons.org/licenses/by/4.0/

1 introduction

mathematics can be used in various areas of human life. the use of mathematics ranges from simple things, such as house numbering, to complicated things, such as the application of mathematics to other sciences. one of the areas that uses mathematics is the industrial field. based on law no. 5 of 1994 on industry, industry is an economic activity that processes raw materials, semi-finished goods, and finished goods into high-value goods. over time, the industrial field in indonesia also began to develop. industrial gases are one form of industrial development, and the main gases available are oxygen (o2), nitrogen (n2), carbon dioxide (co2), argon (ar), hydrogen (h2), helium (he), and acetylene (c2h2) [1]. industrial gas is widely used in various fields such as hospitals, universities, and other companies, resulting in an increasing need for industrial gas. therefore, various companies offer industrial gas, and one of them is pt tira austenite tbk cikarang. pt tira austenite tbk cikarang sells, produces, and distributes industrial gas. the company certainly expects all activities to run efficiently.
the activity usually carried out by the company in distributing its products is delivering products to 4 to 5 consumers in the same direction in one trip. distribution activities can run more efficiently by searching for the shortest route to these 5 consumers. the process of finding the shortest route shows that mathematical science is applied in human life. finding the shortest route for pt tira austenite tbk cikarang, which is a public company, can help in auditing the company's finances by estimating the fuel costs incurred. the process of finding the shortest route can be done with a graph that represents the travel map. a graph consists of two sets, namely a non-empty set of vertices (v(g)) and a set of edges (e(g)) [2]. based on the type of edge, graphs are divided into directed graphs and undirected graphs. a directed graph is a graph whose edges have a direction, while an undirected graph is a graph whose edges have no direction [3]. in the graph representation of pt tira austenite tbk cikarang and its consumers, directed graphs are used, because the map contains one-way roads. labeled graphs are also used. a labeled graph is a graph in which each edge is assigned a value or label [4]. once the graph representation of the travel map from pt tira austenite tbk cikarang to its consumers has been formed, finding the shortest route can be done with various algorithms, one of which is dijkstra's algorithm. dijkstra's algorithm uses the greedy principle: at each step it chooses the minimum-weight edge and inserts it into the solution set [5]. the selection of this algorithm is based on its advantages: it is simple, has a good level of accuracy, and produces a shortest route that is quite accurate [6].
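the greedy selection described above can be sketched in a few lines of python; this is a generic dijkstra implementation over a labeled directed graph with illustrative vertex names, not the study's actual program:

```python
import heapq

def dijkstra(graph, start, end):
    """Shortest path in a weighted directed graph.
    graph maps each vertex to a list of (neighbor, weight) pairs."""
    # priority queue of (distance so far, vertex, path taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        # greedy step: always expand the vertex with the smallest
        # tentative distance first
        dist, vertex, path = heapq.heappop(queue)
        if vertex == end:
            return dist, path
        if vertex in visited:
            continue
        visited.add(vertex)
        for neighbor, weight in graph.get(vertex, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []  # no route between start and end
```

for example, on the directed graph `{"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}`, `dijkstra(graph, "a", "c")` returns the length 3 and the route a, b, c rather than the direct edge of weight 4.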
until now, the process of finding the shortest route has usually been done manually, without utilizing technology. however, over time technology has been used to facilitate human life. the python programming language can be used as an alternative to help the process of finding the shortest route. python has keywords, data types, and operators that help keep programs running as commanded. some keywords contained in python are and, def, break, return, and global. the data types contained in python include integers, floating points, complexes, strings, lists, and tuples [7]. there are several operators in python. the operators that assist in mathematical calculations are arithmetic operators, e.g. + for addition and * for multiplication [8]. assignment operators are useful for placing data into variables, such as = to assign the value of the right operand to the left operand. comparison operators are useful for comparing the left operand with the right operand; they include ==, !=, >, <, >=, and <=. logical operators are useful for determining the truth value of an expression and consist of and, or, and not [9]. the python programming language was chosen because, in addition to being free and easy to learn, its calculation results have a high level of accuracy, providing valid results in the program [10].
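a few of the operator groups listed above in action (a trivial illustration with made-up values, not part of the study's program):

```python
distance = 3 + 4 * 2                  # arithmetic: + and *, so distance is 11
shortest = distance                   # assignment: = places the value into a variable
is_shorter = shortest < 15            # comparison: < yields a boolean
valid = is_shorter and shortest != 0  # logical: and, combined with the != comparison
```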
based on the background presented, the purpose of this study is to determine the shortest route for the distribution of industrial gases from pt tira austenite tbk cikarang to the 5 consumers with the most purchases of the most-sold types of industrial gas from january 2020 to june 2021, namely pt annisa mitra husada, pt indocement tunggal prakarsa, jiangxi thermal power construction, pt solusi bangun indonesia tbk, and universitas kristen indonesia, using the python programming language.

2 research methodology

the type of research used is applied research. applied research is conducted with the aim of applying, testing, and evaluating the ability of a theory to solve practical problems [11]. the study applies dijkstra's algorithm to determine the shortest route of industrial gas distribution to 5 consumers. the subject of this study is google maps, which is used to determine the distances between pt tira austenite tbk cikarang and the 5 consumers. meanwhile, the object of this study is the route that connects the consumers in the distribution of industrial gas. the research method is a literature study, gathering accurate information related to dijkstra's algorithm and its application in determining the shortest route. the sources used are reference books and scientific journals.

3 results and discussion

the process of distributing industrial gases from pt tira austenite tbk cikarang to pt annisa mitra husada, pt indocement tunggal prakarsa, jiangxi thermal power construction, pt solusi bangun indonesia tbk, and universitas kristen indonesia may not take place every day. therefore, for each consumer, the day on which products are most frequently distributed is chosen.
based on sales data from january 2020 to june 2021, the companies to which products are most often distributed on mondays are pt annisa mitra husada, pt solusi bangun indonesia tbk, and universitas kristen indonesia. meanwhile, on wednesdays they are pt indocement tunggal prakarsa and jiangxi thermal power construction, and on fridays jiangxi thermal power construction.

the python program for finding the shortest route of industrial gas distribution at pt tira austenite tbk cikarang uses a gui (graphical user interface) window. the use of a gui can make the process of finding the shortest route easier for users. the syntax used in creating the gui window uses the data types, keywords, and operators available in python. the developed python program displays a gui window, so it requires the tkinter module. the first view the user sees after the program is run is the gui window that has been created. the gui window contains several components that can help users find the shortest route, including buttons. there are 3 buttons that users will see in the initial display, namely the ‘hari senin’, ‘hari rabu’, and ‘hari jumat’ buttons. each button corresponds to a day on which industrial gases are distributed. in addition, there is a box in which users enter the starting point and endpoint, together with information for the user about the starting points and endpoints that can be entered in the box. here is the initial view that the user will see.

figure 1. program start view

the user can enter the starting point and endpoint according to the information that has been provided.
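a minimal sketch of such a tkinter window (the widget labels follow the paper; the layout, helper names, and the `find_route` callback are our assumptions, since the study's actual program is not reproduced here):

```python
import tkinter as tk

# the three day buttons shown in the paper's start view
DAYS = ["hari senin", "hari rabu", "hari jumat"]

def format_result(length, route):
    """Text shown after a day button is pressed: track length and route."""
    return "panjang lintasan: %s\nrute: %s" % (length, " -> ".join(route))

def build_window(find_route):
    """Build the start view: entry boxes for the start/end points, one button
    per distribution day, and a 'hapus' (delete) button to reset the data.
    find_route(day, start, end) must return a (length, route) pair."""
    root = tk.Tk()
    root.title("rute terpendek distribusi gas industri")
    tk.Label(root, text="titik awal").grid(row=0, column=0)
    start = tk.Entry(root); start.grid(row=0, column=1)
    tk.Label(root, text="titik akhir").grid(row=1, column=0)
    end = tk.Entry(root); end.grid(row=1, column=1)
    result = tk.Label(root, text=""); result.grid(row=3, column=0, columnspan=3)

    def on_day(day):
        length, route = find_route(day, start.get(), end.get())
        result.config(text=format_result(length, route))

    for i, day in enumerate(DAYS):
        tk.Button(root, text=day, command=lambda d=day: on_day(d)).grid(row=2, column=i)
    tk.Button(root, text="hapus",
              command=lambda: (start.delete(0, tk.END), end.delete(0, tk.END),
                               result.config(text=""))).grid(row=4, column=1)
    return root
```

calling `build_window(...).mainloop()` with a route-finding function (for instance one backed by dijkstra's algorithm) would display the window; separating the gui from the route logic keeps the latter testable without a display.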
according to the available information, the starting points and endpoints for monday, wednesday, and friday are different. this is due to the difference in the companies to which the distribution process runs on each day. after the user fills in the starting point and endpoint, the user can press the ‘hari senin’, ‘hari rabu’, or ‘hari jumat’ button according to the shortest route to be searched. when one of the day buttons is pressed, the user will see results in the form of the length of the track and the route to be traversed. here is one example, where a user enters v11 as the starting point and v20 as the endpoint and presses the ‘hari rabu’ button in the program.

figure 2. program running

this program was created to help and make it easier for users to find the shortest route. therefore, users are able to search for other shortest routes with this program. users can press the delete button when they want to delete the previous data and perform another shortest route search. here is the view that the user will see when pressing the ‘hapus’ button. the python gui program that has been created is proven to help users find the shortest route, especially in distributing industrial gases at pt tira austenite tbk cikarang. the results displayed are also in accordance with the results of manual calculations.

figure 3. after pressing the delete button

4 conclusion

the python programming language is utilized in the process of finding the shortest route of industrial gas distribution at pt tira austenite tbk cikarang with dijkstra's algorithm, by developing a gui window. the gui window makes it easier for users to use the program and makes the interface more attractive.
when users want to search for the shortest route, they can enter the starting point and endpoint in the program. results appear when the user presses the day button corresponding to the day on which distribution to each consumer most often occurs. users can also search for other shortest routes without closing the gui window by pressing the delete button, after which all data will be deleted.

references

[1] irma nuriska, “penetapan harga gas industri ditinjau dari undang-undang nomor 5 tahun 1999 tentang larangan praktek monopoli dan persaingan usaha tidak sehat (studi kasus: putusan nomor 09/kppu-l/2016)”, doctoral dissertation, jakarta: universitas yarsi, 2018.

[2] jong jek siang, “riset operasi dalam pendekatan algoritmis”, yogyakarta: cv andi offset, 2011.

[3] d. anggraeni, “graf berarah fuzzy (fuzzy digraf)”, mathunesa: jurnal ilmiah matematika, 2(3), 2013.

[4] k. s. dewi, w. wamiliana, m. ansori, “penentuan banyaknya graf tak terhubung berlabel titik berorde tujuh tanpa loop”, jurnal siger matematika, 2(2), 77-89, 2021.

[5] s. andayani and e. w. perwitasari, “penentuan rute terpendek pengambilan sampah di kota merauke menggunakan algoritma dijkstra”, semantik, 4(1), 2014.

[6] r. r. al hakim, et al., “aplikasi algoritma dijkstra dalam penyelesaian berbagai masalah”, expert: jurnal manajemen sistem informasi dan teknologi, 11(1), 42-47, 2021.

[7] h. bhasin, “python basics: a self-teaching introduction”, virginia: mercury learning and information, 2019.

[8] d. amos, d. bader, j. jablonski, f. heisler, “python basics: a practical introduction to python 3”, real python, 2020.

[9] j. p. mueller, “beginning programming with python for dummies”, canada: john wiley & sons, inc., 2018.

[10] a.
hidayah, “program perencanaan plat beton bertulang berdasarkan sni 2847-2013”, doctoral dissertation, universitas muhammadiyah surakarta, 2017.

[11] sugiyono, “metode penelitian kuantitatif, kualitatif, dan r&d”, bandung: penerbit alfabeta, 2013.

international journal of applied sciences and smart technologies, volume 2, issue 1, pages 35–44, p-issn 2655-8564, e-issn 2685-9432

a study of statics on driyarkara electric automobile chassis prototype

a. h. astyanto1,*, y. r. yanto1, a. b. w. adi1 1 department of mechanical engineering, sanata dharma university, yogyakarta, indonesia *corresponding author: achil.herma@usd.ac.id (received 12-06-2019; revised 03-02-2020; accepted 03-02-2020)

abstract

a design phase commonly initiates either a product invention or a product development. before it continues to the fabrication steps, modern computational tools, which enable simulation studies during this phase, often give the opportunity to improve effectiveness and efficiency. through several boundary conditions, analytical and/or computational approaches can be determined. this paper elaborates both static simulations and simple experimental tests on an electric automobile chassis prototype. an application based on finite element analysis was used to obtain both the von mises stresses and the nodal displacements with regard to the construction and/or material strength under a static load mode. furthermore, an experiment-based examination was also executed as a simple physical validation. the studies show acceptable results for the von mises stresses and nodal displacements in both the simulation and experimental approaches.

keywords: chassis statics, von mises stresses, nodal displacements, finite element
1 introduction

electric technologies in automobiles have become widespread during this decade. according to several studies, they offer more benefits than conventional technologies. mainly, electric technologies are claimed to be more efficient in fuel consumption. others agree that the technologies are also eco-friendly from an environmental perspective. people are now taking more interest in developing electric technologies in automobiles, which do not pollute the air and can also minimize noise. in line with these studies of the positive aspects of electric technologies, the indonesian government, through a particular regulation, launched the national energy resources general plan, called ruen, in 2017 [1]. in the regulation, sustained developments in electric automobile technologies are fully supported by the government. incentive packages are provided by the government for both industrial sectors and educational institutions that develop the technologies consistently [2]. both automobile industries and higher educational institutions warmly welcomed the government's offer. through several annual exhibitions, such as the gaikindo indonesia international motor show (giias) and the indonesia international motor show (iims), some leaders in the automobile industry and higher educational institutions have showcased their progress on electric automobile technologies through prototypes [3, 4]. a prototype design commonly initiates product development, including technologies in automobiles. nowadays, parametric computer-aided design (cad) features have improved the ease and efficiency of engineering design projects.
furthermore, it also enables users to conduct simulation studies, including statics, during design phases, which is available in computer-aided engineering (cae) packages. the solution commonly involves a computational methodology based on finite element analysis [5, 6]. in a static analysis, it is also usual to conduct an experimental test through physical tests as either a comparison with or a validation of the analytical/simulation results. it often deals with finding the material element stresses acting in the construction, namely the von mises stresses, and also the nodal displacements. through the tests, deviations or comparisons between simulation results and physical tests can be determined, and these can serve as a validation of the simulation studies [7, 8]. in this study, statics simulations based on a finite element analysis application and basic experimental tests are elaborated. the results involve a comparison of von mises stresses and nodal displacements between the simulation and the experimental tests.

2 research methodology

figure 1 shows the flowchart of this research. several chassis designs made with a cad application initiated the research. some optional designs were taken into consideration during this phase, yielding models with various possible geometries in which the construction should be able to support a minimum static load. furthermore, a simple design was chosen based on the model simulation results. the optimum chassis design and the manufactured chassis are shown in figure 2. model simulations were conducted with an application based on finite element analysis, dealing with forces acting on the construction as approximations of the real planned conditions. furthermore, the results analysis involved calculations of the principal stresses, i.e.
the maximum and minimum stresses, the transformation of plane stresses, the von Mises stresses, the nodal displacements and the factor of safety [9]. The following step was material preparation. The material applied in this study is a square hollow section, for both the simulation and the fabrication. It also had to meet requirements regarding the maximum weight limit of the automobile's construction. The material properties, including the Young's modulus, mass density, Poisson's ratio and yield strength, are shown in Table 1.

Table 1. Material properties: elastic modulus (MPa), mass density, Poisson's ratio, yield strength (MPa)

Figure 1. Flowchart of the research

The chassis fabrication involved several production processes, i.e. metal cutting and joining, covering the processes needed to manufacture the geometrical shape of the elected chassis prototype design. Since the proposed material was aluminum, shielded metal arc welding (SMAW) was applied. The global size of the chassis is given as its length, width and height respectively.

(a) (b) Figure 2. (a) Chassis prototype design and (b) the construction after fabrication

A statics simulation usually requires a comparison as a physical validation. In the construction phase of developing an automobile chassis, this involves a particular mechanics-of-materials approach, namely forces acting on a solid body. The forces acting on the construction and its strength should be assessed against the material strength properties, such as the yield and ultimate strengths, as shown in Figure 3.

Figure 3.
(a) simulation boundary conditions, (b) actual loading condition, and (c) analytical calculation approach

A chassis is the particular construction frame of an automobile in which the loads of all components, including the driver's or passenger's weight, are received and distributed. Newton's second law, relating the forces acting on a solid body, states that the resultant of the forces on a moving body increases linearly with its acceleration:

∑F = m·a    (1)

Meanwhile, a normal stress, defined as the normal force per unit area, is formulated as:

σ = F/A    (2)

The principal stresses, which consist of the maximum and minimum stresses, are derived by transforming the applied stresses as plane stresses:

σ_max,min = (σ_x + σ_y)/2 ± √[((σ_x − σ_y)/2)² + τ_xy²]    (3)

The safety factor is the ratio of the material yield strength to the applied (von Mises equivalent) stress:

SF = σ_yield / √(σ_1² − σ_1·σ_2 + σ_2²)    (4)

In these equations, F is the normal force acting on the body, m is the mass, a is the acceleration, A is the cross-sectional area, σ_yield is the material yield strength, σ_1 and σ_2 are the applied principal stresses, and τ_xy is the plane shear stress.

3 Results and Discussion

Figure 4 describes the load vectors, von Mises stresses, and the analytical space diagram as an approach to the factual condition. Figure 5 shows the nodal displacements on the chassis resulting from several applied load magnitudes, under the boundary conditions applied in the simulation analysis. The red color indicates the nodes/places with the higher impact. The simulation results show that the maximum von Mises stress on the chassis reaches the reported magnitude at the required applied load, while the maximum nodal displacement at that load likewise reaches the reported value under the equivalent force. Figure 4.
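The plane-stress relations in equations (2)–(4) can be sketched numerically. The stress values below are illustrative only (the paper's actual load data are not reproduced here), and the function names are this sketch's own, not the authors':

```python
import math

def principal_stresses(sx, sy, txy):
    """Principal (max/min) stresses for a plane-stress state, eq. (3)."""
    center = (sx + sy) / 2.0
    radius = math.sqrt(((sx - sy) / 2.0) ** 2 + txy ** 2)
    return center + radius, center - radius

def von_mises_plane(s1, s2):
    """Von Mises equivalent stress from the two principal stresses."""
    return math.sqrt(s1 ** 2 - s1 * s2 + s2 ** 2)

def safety_factor(yield_strength, s1, s2):
    """Eq. (4): ratio of yield strength to the von Mises stress."""
    return yield_strength / von_mises_plane(s1, s2)

# Illustrative plane-stress state (MPa); not the paper's load case:
s1, s2 = principal_stresses(sx=40.0, sy=10.0, txy=15.0)
n = safety_factor(yield_strength=215.0, s1=s1, s2=s2)
```

A safety factor above 1 indicates the equivalent stress stays below the yield strength, which is the acceptance criterion used in the conclusions.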
von Mises stress distribution acting on the chassis

It is usual to identify deformation through visual observation of the chassis condition [8]. Table 2 describes the simulated and experimental results for the maximum nodal displacements together with visual observations during several loading conditions of the chassis. At the applied loads, the results show acceptable agreement between the simulation and the experimental results.

Figure 5. Nodal displacement distribution on the chassis

Table 2. Experimental visual observation
Applied load (kg / kN) | Bending | Cracking | Fracture | Deformation
… | no | no | no | no
… | no | no | no | no
… | no | no | no | no

4 Conclusions

Static studies of an electric automobile chassis prototype have been presented in this paper. The results show acceptable deviations between the simulation and experimental studies. At the required applied load, the simulation studies give the reported maximum von Mises stress (in MPa) and nodal displacement, respectively. These results indicate that the material used, with the stated yield strength in MPa, keeps the construction within the acceptable criteria. The experimental tests show that the applied loads do not deform the chassis construction. However, optimization studies are still being pursued as further development, regarding both the chassis design shapes/geometries and the material selection. Furthermore, dynamic and lateral loads are proposed to be calculated for the chassis design.
Acknowledgements

The authors would like to acknowledge the Institution of Research and Community Service (LPPM) of Sanata Dharma University for funding these studies.

References

[1] https://www.cnbcindonesia.com/news/20180224120933-4-5336/dukung-penuhmobil-listrik-sri-mulyani-siapkan-insentif (accessed on 19-02-2018).
[2] https://www.cnbcindonesia.com/news/20180224120933-4-5336/menanti-insentifmobil-listrik-di-tanah-air (accessed on 24-02-2018).
[3] https://otomotif.kompas.com/read/2018/04/03/182200315/festival-mobil-listrikakan-meriahkan-iims-2018 (accessed on 11-03-2019).
[4] https://www.oto.com/berita-mobil/giias-2018-5-mobil-listrik-hadirkan-ragamteknologi-terdepan-21181938 (accessed on 11-03-2019).
[5] Z. Abadi, Fauzun, and M. Mahardika, "Analisa tegangan pada desain frame automatic guided vehicles (AGV) dengan pembebanan statis menggunakan software Abaqus 6.11," Proceeding Seminar Nasional Teknik Mesin ke-9 (SNTM 9), D51–D54, August 2014.
[6] J. S. Pribadi, Fauzun, and M. Mahardika, "Analisa komponen kritis pada desain automatic guided vehicles (AGV) subsystem lifting dengan pembebanan statis menggunakan software Abaqus 6.11," Proceeding Seminar Nasional Teknik Mesin ke-9 (SNTM 9), D15–D20, August 2014.
[7] M. Yamin, D. Satyadarma, and O. A. Hasanudin, "Analisis tegangan pada rangka mobil boogie," Proceeding Seminar Ilmiah Nasional Komputer dan Sistem Intelijen (KOMMIT), 49–56, August 2018.
[8] N. Wahyudi and Y. A. Fahrudi, "Studi eksperimen rancang bangun rangka jenis ladder frame pada kendaraan sport," Journal of Electrical, Electronic, Control, and Automotive Engineering, 1 (1), 71–74, 2016.
[9] F. P. Beer, E. R. Johnston, J. T. DeWolf, and D. F. Mazurek, Mechanics of Materials, sixth edition, McGraw-Hill, New York, USA, 2012.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 1, pages 83–92, p-ISSN 2655-8564, e-ISSN 2685-9432

Effects of the Existence of a Fan in the Wood Drying Room, and the Performance of the Electric Energy Wood Dryer

Wibowo Kusbandono1,*, Petrus Kanisius Purwadi1
1Department of Mechanical Engineering, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: bowo@usd.ac.id
(received 24-01-2021; revised 05-02-2021; accepted 21-03-2021)

Abstract

The purpose of this study is to determine the effect of the presence of a fan in the wood drying room on the drying time of wood, and also to determine the performance of the vapor compression cycle machine used in the wood drying machine and the conditions of the air entering and leaving the drying room. The wood drying machine runs on electrical energy. The research was conducted experimentally. The study varied the presence of fans in the drying room: (a) no fans and (b) 2 fans. The dried object is a sengon wood board with a length of 2 m, a width of 20 cm, and a thickness of 2 cm; 70 boards of uniform size were dried. Before drying, the boards have a moisture content of 29.6%; when dry, they have a moisture content of 10%.
The research gave the following results: (a) with 2 fans in the drying room, the time needed to dry the sengon wood boards is around 42.6 hours, whereas with no fan it is around 49.9 hours; (b) the average coefficient of performance (COP) of the vapor compression cycle machine is 10.65; (c) with 2 fans, the air entering the drying room has a dry-bulb temperature of 40 °C with a relative humidity of 32% RH, and the air leaving has a dry-bulb temperature of 28 °C with a relative humidity of 73% RH.

Keywords: wood drying machine, vapor compression cycle, electricity, COP

1 Introduction

Wood is the main material in the furniture industry. Before being processed into finished goods, the wood must be dry; if it is not, the products can warp into unwanted shapes. One of the problems in the wood industry is how to dry wood without interfering with production. At present, the wood drying process is generally carried out with the help of solar energy or with fuel energy from wood that is no longer used. Energy from wood fuel is not only complicated but also not environmentally friendly, and the time needed for the drying process is long, so careful planning is required so that production is not disturbed by a lack of dry wood. The use of solar energy is more practical, more environmentally friendly and cheaper, with an unlimited, free energy source available everywhere, but this method cannot be relied on when the rainy season arrives, and the rainy season lasts quite a long time. It is therefore necessary to look for another alternative that is more practical, environmentally friendly and usable anytime and anywhere. The solution is to use electrical energy.
Many studies related to drying with electrical energy have been carried out. A suitable way to use electrical energy is a vapor compression cycle machine, as these researchers have done for the drying process: the vapor compression cycle machine is used to produce dry, hot air, with which the drying process can run well. The dried objects differ between the studies: Mitsunori [1] conducted research on cloth, Balioglu [2] and Bison [3] on laundry, and Kusbandono and Purwadi [4, 5] on objects to be chilled. From this prior research a question arises for the authors: can the wood drying process be carried out using electrical energy involving a vapor compression cycle machine? As is known, wood is a solid, dense, hard object of relatively large size, with very low moisture content in the wood. Starting from this problem, the research was carried out.

1.1 Vapor compression cycle

The main components of a vapor compression machine are the compressor, condenser, capillary tube and evaporator. The working fluid used in the vapor compression cycle is a refrigerant. Figure 1 presents the arrangement of the main components of a vapor compression engine, and Figure 2 presents a simple vapor compression cycle on the p-h diagram.

Figure 1. Circuit of the main components of a vapor compression cycle engine

Figure 2.
The vapor compression cycle in the p-h diagram

The processes of the vapor compression cycle engine are: (1) process 1-2: isentropic compression; (2) process 2-2a: desuperheating; (3) process 2a-3: condensation; (4) process 3-4: throttling, i.e. lowering the pressure isenthalpically (at fixed enthalpy); (5) process 4-1: evaporation of the refrigerant. During the evaporation process, heat flows from the ambient air around the evaporator to the refrigerant flowing in the evaporator pipe. The amount of heat absorbed by the evaporator per unit mass of refrigerant is Qin, and the amount of heat released by the condenser per unit mass of refrigerant is Qout. During the compression process, the compressor works to raise the refrigerant pressure from low to high, that is, from the evaporator working pressure to the condenser working pressure; the work done by the compressor per unit mass of refrigerant is Win. During the desuperheating and condensation processes, heat flows from the condenser out into the surrounding environment. The work done by the compressor corresponds to the power supplied by a power source, i.e. electrical energy, to the compressor per unit mass of refrigerant. In this vapor compression cycle, the subcooling and superheating processes are neglected. The heat absorbed by the evaporator per unit mass of refrigerant (Qin) can be expressed as

Qin = h1 − h4.    (1)

The heat released by the condenser per unit mass of refrigerant (Qout) can be expressed as

Qout = h2 − h3.    (2)

The work performed by the compressor per unit mass of refrigerant (Win) can be expressed as

Win = h2 − h1.
(3)

The values h1, h2, h3 and h4 are, respectively, the enthalpy of the refrigerant entering the compressor, leaving the compressor, leaving the condenser, and entering the evaporator.

The performance (coefficient of performance, COP) of the vapor compression cycle machine used in this electrically driven wood drying machine is the ratio of the useful energy per unit mass of refrigerant to the energy required per unit mass of refrigerant. Since both the dehumidifying effect of the evaporator and the heating effect of the condenser are useful here, the COP can be expressed as

COP = (Qin + Qout) / Win.    (4)

The heat absorbed by the evaporator causes the ambient air passing through the evaporator to dry, and the heat released by the condenser raises the temperature of the air passing through it, making it hot. Equation (4) does not account for the energy used to drive the fans: the evaporator fan, the condenser fan and the fans in the drying room.

1.2 Open air system wood drying machine

The wood drying machine used in this study uses an open air system: the air used as the drying medium is taken from outside. The outside air is introduced into the drying room by first passing through the evaporator and condenser; it can flow in because of the evaporator fan and condenser fan. Once inside the drying room, the air performs the wood drying process. The air that has been used to dry the wood is then exhausted from the dryer and not reused.
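Equations (1)–(4) can be checked with a short sketch. The enthalpy values below are hypothetical, chosen only so that the energy quantities reproduce Table 3 (Qin = 130.3, Qout = 157.3, Win = 27 kJ/kg); note that h3 = h4 reflects the isenthalpic throttling process 3-4:

```python
def cop_drying(h1, h2, h3, h4):
    """COP of the drying machine, eq. (4). Both the evaporator's
    dehumidifying effect (Qin) and the condenser's heating effect
    (Qout) are useful, so COP = (Qin + Qout) / Win. Fan power is
    excluded, as stated in the text."""
    q_in = h1 - h4    # heat absorbed by the evaporator, kJ/kg, eq. (1)
    q_out = h2 - h3   # heat released by the condenser, kJ/kg, eq. (2)
    w_in = h2 - h1    # compressor work, kJ/kg, eq. (3)
    return (q_in + q_out) / w_in

# Hypothetical enthalpies (kJ/kg) matching the Table 3 energy values:
cop = cop_drying(h1=400.3, h2=427.3, h3=270.0, h4=270.0)
```

With these values, (130.3 + 157.3) / 27 reproduces the reported average COP of 10.65.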
2 Research Methodology

In this section we describe how this research was carried out: the research methods and variations, the research flow, the equipment used, the dried object, and how data were collected.

2.1 Research methods and variations

The research was conducted experimentally. The study varied the presence of fans in the wood drying room: (a) no fan and (b) two fans in the wood drying room.

2.2 Research flow

The research follows the flow presented in Figure 3.

Figure 3. Research flow

2.3 Equipment

The rotary compressor used in the vapor compression cycle engine has a power of 1 hp; the sizes of the other main components are matched to the compressor power. The compression cycle uses R134a refrigerant. Each fan used in the wood drying room has a power of 30 watts; the evaporator fan has a power of 17.5 watts and the condenser fan 20 watts. The wood drying machine has a total length of 350 cm, a total height of 180 cm and a total width of 140 cm.

2.4 Dried object

The dried objects were wet sengon wood boards with an initial moisture content of 29.6%, dried until the moisture content in the wood reached 10%. The boards are of uniform size: 2 m long, 20 cm wide and 2 cm thick. Seventy boards are dried in one run. The boards are arranged on racks stacked from bottom to top; each rack holds 5 boards laid horizontally, and 14 racks are stacked.

2.5 Data collection

Before the wet wood is loaded, the machine is started first.
Data collection starts after all the wet wood is in the drying room and the door has been closed, and stops after all the sengon wood is dry (moisture content in the wood below 10%). The moisture content in the wood is measured with a digital moisture meter. Temperature and humidity data were measured using a thermocouple and a hygrometer, while the low pressure (evaporator working pressure) and high pressure (condenser working pressure) in the vapor compression cycle were measured with pressure gauges. The enthalpy data are taken from the p-h diagram.

3 Results and Discussion

Figure 4 presents the drying process of the sengon wood boards over time, from the initial wet condition with an average moisture content of 29.6% down to an average moisture content of 10%. It is apparent that the presence of a fan in the drying room affects the drying time: the fans make the airflow faster and the drying time shorter. In other words, the drying time of the wood is affected by the air flow velocity in the drying room; the greater the air velocity across the wood surface, the shorter the drying time. If there is no fan in the drying room, the air flow during drying is caused only by the evaporator fan and condenser fan. From Table 1 and Figure 4, if there is no fan in the drying room the drying time is 49.9 hours, while with fans it is only 42.6 hours. The use of 2 fans thus shortens the drying time by about 7.3 hours, i.e. speeds up drying by 14.6%. Although it shortens the drying time, the process then requires additional electrical power to drive the 2 fans.
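The time saving quoted above follows directly from the two measured drying times:

```python
t_no_fan = 49.9    # hours, no fan in the drying room (Table 1)
t_two_fans = 42.6  # hours, two fans in the drying room (Table 1)

saved = t_no_fan - t_two_fans            # hours saved by the fans
speedup_pct = 100.0 * saved / t_no_fan   # relative reduction, percent
```

This gives about 7.3 hours saved, a roughly 14.6% reduction in drying time, matching the figures in the text.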
The air conditions when there are 2 fans in the drying room are presented in Table 2. Further research is needed on the effect of adding fans on the optimal operation of a wood drying machine.

Figure 4. Moisture content in sengon wood over time (without fan vs. with 2 fans)

Table 1. Drying time of sengon wood
No | Research variation | Drying time t (hours)
1 | No fan in the wood drying room | 49.9
2 | 2 fans in the wood drying room | 42.6

Table 2. Air conditions entering and leaving the sengon wood drying room
No | Condition in the drying room | Leaving air Tdb (°C) | Leaving air RH (%) | Entering air Tdb (°C) | Entering air RH (%)
1 | No fan | 30 | 68 | 40.5 | 31
2 | 2 fans | 28 | 73 | 40 | 32

With increasing air flow velocity, more air flows across the wood, and the greater the air flow, the greater the ability of the air to take up water from the wood. Because the air system is open, the air that has passed over the wood is immediately exhausted from the drying chamber and replaced with new air coming from the condenser. The new air used to dry the wood is dry and its temperature is high enough: the component of the vapor compression cycle that makes the air dry is the evaporator, and the component that raises the air temperature is the condenser. Having 2 fans in the drying chamber does not have much impact on the characteristics of the vapor compression cycle machine itself; the fans mainly affect the evenness of the air flow velocity in the drying room, making the flow more even and the air easier to move out of the drying room.
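The moisture pickup implied by Table 2 can be estimated with standard psychrometrics. The Magnus correlation below is a textbook approximation, not taken from the paper, and sea-level atmospheric pressure is assumed:

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation for saturation vapor pressure over
    water, in Pa, for air temperature t_c in degrees Celsius."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def humidity_ratio(t_c, rh_pct, p_atm=101325.0):
    """Humidity ratio (kg of water vapor per kg of dry air)."""
    p_w = (rh_pct / 100.0) * saturation_vapor_pressure(t_c)
    return 0.622 * p_w / (p_atm - p_w)

# Air entering (40 C, 32% RH) vs. leaving (28 C, 73% RH), per Table 2:
w_in = humidity_ratio(40.0, 32.0)
w_out = humidity_ratio(28.0, 73.0)
picked_up = w_out - w_in  # moisture removed from the wood per kg dry air
```

The positive difference confirms that the air leaves the drying room carrying more water than it entered with, which is what drives the drying.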
Because the air flow system is open, the air flowing through the evaporator and condenser is driven predominantly by the evaporator fan and condenser fan alone; however, the work of these two fans becomes lighter with fans in the drying room. The average characteristics of the vapor compression cycle machine on the wood drying machine are presented in Table 3.

Table 3. Performance of the vapor compression cycle machine on the wood drying machine (sengon wood drying)
No | Tevap (°C) | Tcond (°C) | Qin (kJ/kg) | Qout (kJ/kg) | Win (kJ/kg) | COP
1 | 6 | 50 | 130.3 | 157.3 | 27 | 10.65

4 Conclusion

The results for this open air system, vapor compression cycle, electric energy wood drying machine are: (a) with 2 fans in the drying room, the time required to dry the sengon wood boards is 42.6 hours, whereas with no fan it is 49.9 hours; (b) the average coefficient of performance (COP) of the machine is 10.65; (c) with 2 fans, the air entering the drying room has a dry-bulb temperature of 40 °C with 32% RH, and the air leaving the drying chamber has a dry-bulb temperature of 28 °C with 73% RH. This research can be developed further by determining the relationship between the condenser working temperature, the COP and the drying time, or between the evaporator working temperature, the dryer performance and the drying time.

References

[1] T. Mitsunori, "Dehumidifying and heating apparatus and clothes drying machine using the same," European Patent Specification, EP 2 468 948 B1, 27-11-2013, 2013.
[2] Balioglu, "Heat pump laundry dryer machine," Patent Application Publication, Pub. No: US 2013/0047456 A1, April 2013.
[3] Bison, "Heat pump laundry dryer and a method for operating a heat pump laundry dryer," Patent Application Publication, Pub. No: US 2012/0210597 A1, 2012.
[4] W. Kusbandono and P. K. Purwadi, "Pengaruh adanya kipas yang mengalirkan udara melintasi kondensor terhadap COP dan efisiensi mesin pendingin showcase," Prosiding Seminar Nasional XI Rekayasa Teknologi Industri dan Informasi, 313–317, 2016.
[5] W. Kusbandono and P. K. Purwadi, "COP mesin pendingin refrigeran sekunder," Jurnal Penelitian, 19 (1), 79–86, 2015.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 1, pages 97–108, p-ISSN 2655-8564, e-ISSN 2685-9432

This work is licensed under a Creative Commons Attribution 4.0 International License

Comparison of Static Signature Identification Using Artificial Neural Networks Based on Haar, Daubechies and Symlets Wavelet Transformations

R. A. Kumalasanti1,*
1Department of Informatics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: rosaliasanti@usd.ac.id
(received 21-05-2022; revised 28-05-2022; accepted 29-05-2022)

Abstract

The signature is a biometric attribute that is quite important for each individual and can be used as a mark of identity. Until now, the signature is still used as a sign of legal approval and is accepted by everyone, which makes it worthy of attention from a security perspective. Various approaches have been proposed for signature identification in order to minimize signature forgery. This study discusses the identification of signatures using images of signatures on paper. The identification consists of two processes, training and testing, utilizing a backpropagation artificial neural network and the wavelet transform.
Optimal results are obtained using an ANN with a learning rate of 0.09 and two hidden layers of 20 and 10 nodes respectively, with the Haar wavelet performing best, reaching 94.44% accuracy.

Keywords: signature, ANN, identification, backpropagation, wavelet

1 Introduction

The rapid development of technology in this era has changed lifestyles in society. Proof of the validity of a transaction or document has become important given the increasing number and diversity of activities. Proof of validity in the form of personal data such as signatures is still used to attest to personal identity and the validity of a document. Nowadays, many electronic means of personal identification have been developed, such as iris recognition, fingerprints and facial recognition, but the conventional method in the form of signatures is still very much needed. Signatures are considered fairly easy and inexpensive, so even in this modern era they are still regarded as valid proof for legal use. Personal data is an attribute of each individual whose validity needs to be protected. A signature is a biometric attribute whose ownership is, physiologically, a hallmark or characteristic of each individual. Biometrics is the science of automatic recognition of individuals; it depends on a person's physiology and behavior, so these attributes are attached to an individual with their own uniqueness and characteristics. This shows that the signature is a very important individual attribute whose ownership needs protection. Visually, it is difficult for the human eye to compare signatures that look similar, even those with the same pattern. This limitation means signatures are often misused by irresponsible parties. Solo Pos once published news that fraudulent acts had been committed by civil-servant (CPNS) applicants.
The fraud that occurred in Solo involved falsifying signatures on diplomas and important documents contained in the application files of candidate civil servants (CPNS); as much as 40% of the documents had forged signatures, which is not a small number [1]. The news shows that many acts of signature forgery have been carried out, as if it were considered normal. Of course this harms both the owner of the signature and the recipient of the fake document. Systems with artificial intelligence are needed to help humans compare signatures. Optimal accuracy is the goal of system development in order to minimize signature forgery. The system built will reduce the human work of comparing similar signatures under physical visual limitations. A system with strengths in pattern recognition is built using an artificial neural network and the wavelet transform. An ANN consists of neurons that can communicate between network layers. Four types of wavelets are tried to find the optimal accuracy. The input data used are static (offline) signatures on paper, digitized with a scanner. In this study, the wavelet transform is used for pre-processing. A wavelet is a mathematical function that offers high temporal resolution at high frequencies and better frequency resolution at low frequencies [2]. This advantage is expected to provide quality information that supports the learning process in the network.

2 Research Methodology

Handwriting is an interesting and challenging object for pattern recognition. There are also studies that use handwriting as an object for optical character recognition. Such a system can be used to recognize English characters (a–z, A–Z), numbers (0–9) as well as special characters or symbols (x, $, %, ^, &, *).
The research was conducted by training a neural network using the backpropagation algorithm, and the resulting handwriting pattern recognition achieved very high accuracy [3]. Other research related to signatures was also carried out using a backpropagation ANN. Identification of the signature is expected to give the owner a sense of security: the system is able to recognize the signature and its owner so as to reduce forgery, producing as output the identification accuracy and the face of the signature owner [4]. Further research concerns speech signal pattern recognition using the wavelet transform and artificial intelligence. The speech signal is biometric data, like handwriting, fingerprints and the iris: data owned by individuals with a unique character. The pattern recognition phase of the system, built with a neural network design, provided fairly high recognition accuracy of 81.96% [5].

2.1 Handwritten signature

A signature is an individual identity with its own characteristics and patterns. The uniqueness of the signature is often used as an attribute or marker of the validity of a document. Until now, signatures are still often used anywhere and anytime to mark the validity of every transaction, even on important documents. Visually, it is difficult for the human eye to compare signatures: at first glance two signatures may look similar, yet one may have been forged by an irresponsible person. Human physical limitations (eye disorders, fatigue and loss of focus) can affect interpretation and lead to inaccuracy in seeing, let alone comparing, signatures.
Based on these problems, a system is needed that can assist in comparing signatures that look the same but may be forged, easing the human work of determining the authenticity of signatures. Signatures are divided into two techniques, offline (static) and online (dynamic). An offline signature is one made on a piece of paper and scanned for processing, while an online signature is made on a digitizer device. Basically there are three types of forgery: random forgery, where the forger knows only the name of the signature's owner and signs that name in the forger's own pattern; simple forgery, a signature made by someone with no practice or prior experience in imitating signatures; and skilled forgery, a signature made by someone experienced in copying signatures [6].

2.2 Artificial neural network

An artificial neural network (ANN) is one of the reliable approaches for recognizing an object. An ANN is a mathematical model built by analogy with how the human brain processes an object. Humans recognize objects by first collecting data as memory input, so that when they meet the object again at a different time, they remember it. Neurons in the human brain collect signals from adjacent neurons through dendrites; electrical activity is transmitted by neurons through axons consisting of thousands of branches. Neurons fire when they receive stimuli from outside and convey them to neighboring neurons so the correct response can be given. The network is interconnected through several stages or processes and performs parallel distributed processing [7]. In an ANN, neurons are referred to as nodes, and the network is divided into three parts: the input layer, the hidden layer and the output layer.
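The layered structure just described, trained with the backpropagation algorithm discussed in section 2.3, can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's signature classifier; only the two hidden layers (20 and 10 nodes) and the learning rate of 0.09 mirror the architecture reported in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, weights):
    """Forward pass: propagate activations layer by layer."""
    activations = [X]
    for w in weights:
        activations.append(sigmoid(activations[-1] @ w))
    return activations

# Toy network: 4 input features, hidden layers of 20 and 10 nodes,
# 1 output node, learning rate 0.09. Input size and labels are
# placeholders, not the paper's signature feature vectors.
sizes = [4, 20, 10, 1]
lr = 0.09
weights = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]

X = rng.random((16, 4))
y = (X.sum(axis=1, keepdims=True) > 2.0).astype(float)  # synthetic labels

initial_loss = float(np.mean((forward(X, weights)[-1] - y) ** 2))
for _ in range(2000):
    acts = forward(X, weights)
    # backward pass: start from the output error, propagate it back
    delta = (acts[-1] - y) * acts[-1] * (1.0 - acts[-1])
    for i in reversed(range(len(weights))):
        grad = acts[i].T @ delta / len(X)  # mean gradient over the batch
        if i > 0:  # error term for the preceding hidden layer
            delta = (delta @ weights[i].T) * acts[i] * (1.0 - acts[i])
        weights[i] -= lr * grad
final_loss = float(np.mean((forward(X, weights)[-1] - y) ** 2))
```

The loop shows the two stages of the backpropagation cycle: a forward pass to compute activations, then a backward pass in which the output error is pushed back through the weights, reducing the error over the epochs.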
The number of nodes and the number of layers depend on the needs of the system, so the ANN learning phase requires some simulation to find the most appropriate and optimal ANN architecture. An ANN characteristically has layers consisting of a number of nodes (input layer, hidden layer, and output layer), each holding an activation function. An ANN with an input layer, hidden layer, and output layer can be seen in Figure 1.

Figure 1. Artificial neural network representation [8]

2.3 Backpropagation
In this study, the ANN is combined with the backpropagation algorithm to identify signature patterns. The backpropagation algorithm works by using the error at the output layer to change the weights through backward propagation, after the forward propagation process is complete [9]. The backpropagation cycle consists of two stages, namely the forward pass (forward propagation) and the backward pass (backward propagation). Backpropagation is a supervised learning approach because the desired result is already known. After processing, there may be a difference between the predicted and desired results; if so, the network immediately performs backward propagation to reach a smaller error tolerance. A smaller error also yields a more optimal identification accuracy.

2.5 Wavelet Transform Preprocessing
This study uses the wavelet transform in preprocessing to represent the signal in both time and frequency. The wavelet transform is a mathematical tool with a transfer function at each level, so it can produce coefficients that represent the characteristics of the signal [10]. The strength of wavelets lies in signal analysis.
Wavelet signal analysis is multi-resolution, so it provides better signal accuracy and analysis. The advantage of wavelets is precisely this multi-resolution view of the signal: features that may not be detected at one resolution can still be detected at another resolution [11], [12]. The discrete wavelet transform (DWT) is an efficient method for retrieving information from an image in the form of discrete data. When using wavelets, the level of decomposition must also be chosen to obtain optimal accuracy. A low-resolution image is decomposed by the DWT into different subbands, namely low-low (LL), low-high (LH), high-low (HL), and high-high (HH). Figure 2 shows a level-2 wavelet decomposition.

Figure 2. Level 2 decomposition on wavelet [10]

3 Results and Discussion
Identification of the signature image is divided into two phases, namely training and testing. The image data was obtained from 15 individuals, each of whom wrote 6 signatures as input data. Each signature image sample, sized 256x256 pixels, undergoes preprocessing. The static signature is scanned with a scanner to reduce limitations due to rotation and shooting distance. The digital image is then cropped to 256x256 pixels and converted into a black-and-white image (thresholding). Black-and-white images make it easier for the system to learn image patterns and impose a lighter computing load than greyscale or RGB. The black-and-white image is then transformed with several predetermined wavelets; this study uses several kinds of wavelets in order to obtain optimal accuracy. Normalization is done before training with the backpropagation ANN. This training phase produces weights.
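One level of the 2-D Haar decomposition into the four subbands can be hand-rolled in a few lines. This is a sketch only: libraries such as PyWavelets provide it as `dwt2`, and this averaging variant differs from the orthonormal scaling by a constant factor.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT, returning the LL, LH, HL, HH subbands."""
    a = img.astype(float)
    # transform rows: average (low-pass) and difference (high-pass)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # transform columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
```

Each subband has half the side length of the input; repeating the transform on LL yields the next decomposition level, as in Figure 2.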
Each weight, according to its respective ID, is stored in the data store. These stored weights are the output of the training phase; they are later used as input to produce pattern characteristics, which are then tested on the several wavelets under study. The signature image identification flow can be seen in Figure 3.

Figure 3. Flow diagram (handwriting signature image, wavelet transform, normalization, backpropagation ANN training and testing, image identification)

Each signature image is resized to 256x256. A smaller image size lightens the computational load, but the pattern characteristics also decrease; 256x256 was chosen as the working size for signature identification. The simulations aim to find the wavelet with optimal accuracy. The wavelets tested are Haar, Daubechies 2, Daubechies 3, and Symlets 3; these four were selected based on previous studies that obtained fairly high accuracy in pattern recognition. Level-4 wavelet decomposition is used to obtain the information. The choice of decomposition level is related to the amount of information: the higher the level, the less information is obtained, so the level must be chosen to still give optimal identification results. The learning rate used is 0.9 with two hidden layers of 20 and 10 nodes, respectively. Figure 4 shows the signature image training process, and the simulation of the four wavelets can be seen in Figure 5.
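The trade-off between decomposition level and retained information can be made concrete: each level halves both sides of the LL approximation. A small sketch (the sizes follow from the 256x256 input; the function name is ours):

```python
def approx_coeffs(side, level):
    """Coefficient count of the LL approximation after `level` dyadic
    decompositions of a side x side image (each level halves both sides)."""
    return (side // 2 ** level) ** 2

# A 256x256 signature keeps a 16x16 = 256-coefficient approximation at
# level 4; every extra level quarters the information left for the ANN.
n4 = approx_coeffs(256, 4)
```

This is why the level cannot be raised arbitrarily: at level 4 only 256 approximation coefficients remain per image.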
The figures below also show the MSE and epoch results for each wavelet.

Figure 4. Training on the identification process

Table 1. Image identification comparison results with level-4 wavelet transform, learning rate 0.09

  Wavelet   Epoch     MSE      Accuracy
  Haar      100000    0.0909   94.44%
  db2       100000    0.0634   87.78%
  db3       1382      0.0858   91.11%
  sym3      16724     0.1022   87.78%

Figure 5. ANN performance results using the Haar, Daubechies 2, Daubechies 3, and Symlets 3 wavelets

The accuracy results in Table 1 show that each wavelet has its own reliability. The Haar wavelet (epoch 100000, MSE 0.0909) is the most superior, with an accuracy of 94.44%. The backpropagation ANN achieves optimal results in static signature pattern recognition by exploiting the advantages of wavelets in preprocessing.

4 Conclusion
Identification of the signature image has been successfully applied through two processes, namely training and testing. Based on the simulations, it can be concluded that signature identification using the wavelet transform and an ANN has been successfully constructed. The 256x256 images obtained by scanning were preprocessed, and several simulations were carried out to get optimal results on several types of wavelets, namely Haar, Daubechies 2, Daubechies 3, and Symlets 3, with 4 levels of decomposition and a learning rate of 0.09. The best accuracy obtained was 94.44%, using the Haar wavelet transform.

References
[1] M. Khamdi, "Solo Pos," 20 October 2013. [Online]. Available: www.solopos.com. [Accessed 13 August 2021].
[2] M. G. Haleem, L. E. George and H.
M. Bayti, "Fingerprint recognition using Haar wavelet transformation and local ridge attributes only," International Journal of Advanced Research in Computer Science and Software Engineering, 4(1), 122-130, January 2014.
[3] S. S. Kharkhar, H. J. Mali, S. S. Gandhari, S. S. Shrikande and D. K. Chitre, "Handwriting recognition using neural network," International Journal of Engineering Development and Research, 5(4), 1179-1181, December 2017.
[4] P. Sovia, M. Yanto and W. Nursany, "Implementation of signature recognition using backpropagation," Journal of Computer Science and Information Technology, 1(1), 30-44, December 2016.
[5] O. Rangel, D. Amaya and O. Ramos, "Pattern recognition of speech signals using wavelet transform and artificial intelligence," International Journal of Applied Engineering Research, 12(21), 11088-11093, August 2017.
[6] Y. Y. Munaye and G. B. Tarekegn, "Signature recognition system using artificial neural network," European Journal of Computer Science and Information Technology, 6(2), 42-47, April 2018.
[7] N. Z. Zacharis, "Predicting student academic performance in blended learning using artificial neural networks," International Journal of Artificial Intelligence and Applications, 7(5), 17-29, September 2016.
[8] P. G. Patil and R. S. Hegadi, "Offline handwritten signature classification using wavelet and support vector machines," International Journal of Engineering Science and Innovative Technology, 2(3), 37-42, May 2013.
[9] I. P. B. D. Purwanta, C. P. Kuntor Adi and N. P. N. P. Dewi, "Backpropagation neural network for book classification using the image cover," International Journal of Applied Sciences and Smart Technologies, 2(2), 179-196, December 2020.
[10] Suma'inna, "Detection of cardiac abnormalities based on ECG pattern recognition using wavelet and artificial neural network," Pushpa Publishing House, 76(1), 111-122, May 2013.
[11] P. Meibner, H. Watschke, J. Winter and T. Vietor, "Artificial neural network-based material parameter identification for numerical simulations of additively manufactured parts by material extrusion," MDPI, 12(2949), 1-28, December 2020.
[12] J. M. Castillo, J. M. Cspedes and H. E. Cuchango, "Water level prediction using artificial neural network model," International Journal of Applied Engineering Research, 13(19), 14378-14381, July 2018.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 2, pages 173-184, p-ISSN 2655-8564, e-ISSN 2685-9432
This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Employee Presence Using Body Temperature Detection and Face Recognition
Arif Ainur Rafiq1,*, Erna Alimudin1, and Della Puspa Rani1
1Department of Electronics Engineering, Polytechnic State of Cilacap, Cilacap, Indonesia
*Corresponding author: aar@pnc.ac.id
(Received 18-08-2022; revised 16-09-2022; accepted 21-09-2022)

Abstract
Employee performance can be measured by presence or attendance, which applies to both civil servants and non-civil servants. Because the attendance system still uses a manual technique, it is considered inefficient due to the potential for data fraud and attendance problems. In addition, the government is adopting precautions against viruses in office buildings to maintain business continuity while the pandemic is being addressed.
The goal of this study was to employ a facial recognition system and temperature measurement to lower the danger of COVID-19 transmission, while also minimizing paper use by adopting facial recognition as a substitute for manual presence records; it permits the digitization of a formerly manual process. The OpenCV library allows computers to detect faces using the Haar cascade classifier approach, with Python as the programming language. A Logitech C930e webcam with a resolution of 1080p at 30 fps was used to capture facial data, which was then processed on a Raspberry Pi 4. An MLX90614 sensor, controlled by an Arduino Uno microcontroller, monitors body temperature. Based on the body temperature and face recognition tests, the readings are well integrated into the database. The development of a more accurate temperature sensor reading method with respect to distance and employee body temperature is a priority for future research.

Keywords: Arduino Uno, Haar cascade classifier, presence, temperature detection, face recognition

1 Introduction
The presence or attendance of employees, both civil servants and non-civil servants, is one sign of their performance. This study considers the relevance of attendance data collection, its relationship to performance indicators that become quality standards, and material for evaluating and improving the quality of public services, based on Regulation of the Government of the Republic of Indonesia No. 30 of 2019 concerning the work assessment of civil servants [1]. Manually filling attendance forms for recording attendance data is inefficient because of the potential for fraud, such as data falsification or human error. Furthermore, administrative recapitulation is done manually, which takes a long time because a lot of data must be entered.
As technology advances, many attendance systems have been built with pattern recognition of unique human physical traits, such as facial recognition and fingerprint recognition. These distinctive traits reduce fraud during the registration procedure [2]. There are many biometric identification methods used for attendance systems; biometric recognition can be based on physical or behavioral characteristics, such as the iris, voice, fingerprint, and face [3]. One well-discussed field in biometric identification is the fingerprint. Fingerprints are considered secure because each person's pattern is unique [4]. Unfortunately, systems that use fingerprints require the user to make direct physical contact with the fingerprint reader for a few seconds to perform pattern recognition. This can increase the risk of contamination by harmful pathogenic microbes or cross-contamination of food and air by other users [5]. To avoid contamination through physical contact during corona virus disease (COVID) 19, the presence system can use another biometric identification, one of which is face recognition. Face recognition has often been used for various purposes: security systems and administrative management through face recognition have also started to be used and developed in industry, business, and offices [6]. A face recognition-based attendance system has been built previously and obtained a high accuracy score of 80%; that study used the eigenface and Haar cascade classifier methods for student attendance [7]. The Haar cascade algorithm is the object detection method created by Paul Viola and Michael Jones; it divides the face into several areas, such as the eye area, nose area, and mouth area, depending on the images it has been trained on.
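The Viola-Jones detector owes its speed to the integral image, which turns any rectangular pixel sum (the building block of Haar-like features) into a four-lookup operation. A minimal numpy sketch, not the paper's code:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] holds the sum of all pixels at or
    above-left of (y, x)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of an h x w rectangle from 4 lookups in a zero-padded table.
    (Padding per call keeps the sketch short; cache it in practice.)"""
    p = np.pad(ii, ((1, 0), (1, 0)))
    return p[top + h, left + w] - p[top, left + w] - p[top + h, left] + p[top, left]

img = np.ones((4, 4))
ii = integral_image(img)
```

A Haar-like feature is then just the difference of two or three such rectangle sums, which is why thousands of features can be evaluated per window in real time.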
[8] Decree of the Health Minister of the Republic of Indonesia Number HK.01.07/MENKES/382/2020 on community health protocols in public places and facilities for the prevention and control of corona virus disease 2019 (COVID-19) states that body temperature must be measured at the entrance for guests and employees. If the temperature is found to be greater than 37.3°C, the employee or guest will not be allowed to enter unless declared negative/non-reactive for COVID-19 after a laboratory examination (an RT-PCR examination valid for 7 days, or a rapid test valid for 3 days) before entering the office [9]. Moreover, the guidelines for the prevention and control of COVID-19 in offices and industries in support of business continuity in a pandemic situation, from the Minister of Health's decree No. HK.01.07/MENKES/328/2020, state that people with fever (at or above 38°C) or a history of fever, or with symptoms of respiratory disorders such as a runny nose, sore throat, or cough with no other cause based on a convincing clinical picture, who in the last 14 days before the onset of symptoms have a history of travel to or residence in countries or regions reporting local transmission, or a history of contact with confirmed COVID-19 cases, are classified as persons under monitoring [10]. Based on these decrees from the Minister of Health of the Republic of Indonesia, temperature measurement is one of the important steps to prevent COVID-19 transmission. Therefore, it is necessary to design a device that can measure the body temperature of each employee before entering the office. This is done to prevent the spread of COVID-19. The device also replaces the manual, paper-based presence system, so that manipulation of presence data can be avoided.
The proposed device is expected to be used in the office.

2 Research Methodology
The working procedure of the tool is depicted using block diagrams, which later serve as a guide for completing the final project. The system's block diagram is shown in Figure 1.

Figure 1. System block diagram

The functions of the blocks are as follows:
1. The Raspberry Pi serves as the system's controller.
2. The temperature sensor determines an employee's body temperature.
3. The monitor displays the staff temperature readings.
4. The camera photographs employees' faces as an indication of their presence.
5. A buzzer acts as a sound indicator.
6. The database keeps track of staff attendance.

The system operates on the following principle: the camera activates when a person or face is in view, and the image then appears on the monitor screen. The temperature sensor activates when the subject is in front of it; the results are presented on the monitor and recorded directly into the database. A flowchart is the standard way to describe the process; the system design flowchart can be seen in Figure 2.

Figure 2. Flowchart of the system design

When an object is detected, the camera detects the employee's face, and if it matches, the temperature sensor immediately reads the employee's temperature. If the temperature is less than 38°C, the employee is marked as arrived and enters the room directly.
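The admission rule in the flowchart can be condensed into a small decision function. This is a sketch only: the status labels and function name are ours, and the 38°C threshold follows the flowchart, while the buzzer test later uses 37.5°C.

```python
def check_presence(face_matched, temp_c, threshold=38.0):
    """Decide admission per the flowchart: a recognized face with a
    temperature below the threshold is admitted and logged; a high
    temperature triggers follow-up under health protocols."""
    if not face_matched:
        return "unknown"      # unregistered or unrecognized face
    if temp_c < threshold:
        return "admitted"     # logged to the attendance database
    return "follow-up"        # medical history check, self-isolation

status = check_presence(True, 36.4)
```

In the real device this decision also drives the buzzer and the database write.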
If the employee's temperature rises above 38°C, his medical history is followed up and health protocols are applied, such as self-isolation at home. The employee data is also entered into the database. The design of the tool to be created is referred to as the mechanical design. The skeleton of the device is built from a triplex board base, and the camera section is made of aluminum. The frame is made of wood, since wood is easy to work with when laying out components such as the Raspberry Pi, Arduino, and monitor. Figure 3(a) shows the front view and Figure 3(b) the back view of the mechanical design.

Figure 3. Mechanical design from (a) front view and (b) back view

3 Results and Discussions
This section shows how the system was tested after it was built, with data to determine whether or not the designed tool was successful.

3.1 MLX Sensor Temperature Measurement Testing
The face was brought close to the MLX temperature sensor at a predetermined distance of 1 meter, and a non-contact infrared thermometer was used as a comparison. Table 1 shows the results of the temperature sensor test against the non-contact infrared thermometer.
Table 1. Temperature sensor comparison with a non-contact infrared thermometer

  No   User      MLX temperature sensor   Non-contact infrared thermometer   Difference
  1    Dita      35.6°C                   36.5°C                             0.9°C
  2    Jahrona   34.5°C                   36.2°C                             1.7°C
  3    Della     34.8°C                   35.9°C                             1.1°C
  4    Ira       34.6°C                   35.5°C                             1.1°C
  5    Vemmi     35.1°C                   35.9°C                             0.8°C
  6    Ferahma   35.8°C                   36.4°C                             0.6°C

According to the test results, there was a discrepancy between the MLX sensor readings and the non-contact infrared thermometer, with the lowest difference of 0.6°C and the highest of 1.7°C. The measured body temperature was low because the detection distance was too far from the sensor, making the temperature sensor less accurate when reading body temperature.

3.2 User Identification Testing
The goal of this test is to collect photos of people's faces as data, stored in a folder that serves as the database against which the system matches detected faces. Because object detection accuracy in the Haar cascade classifier approach depends on the training images, as many images as feasible were taken to maximize the detection results. Figure 4 shows the identification test at a distance of one meter. The successful and unsuccessful identifications are shown in Table 2.
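Returning to the sensor readings of Section 3.1: the systematic gap between the two instruments suggests a simple constant-offset correction. The sketch below uses the paired readings from Table 1 and is illustrative only; the authors attribute the gap to sensing distance, so a real calibration would model distance explicitly.

```python
from statistics import mean

# Paired readings from Table 1 (reference thermometer vs. MLX sensor, in °C).
mlx = [35.6, 34.5, 34.8, 34.6, 35.1, 35.8]
ref = [36.5, 36.2, 35.9, 35.5, 35.9, 36.4]

# Constant-offset correction: add the mean reference-minus-sensor gap
# to future MLX readings.
offset = mean(r - m for r, m in zip(ref, mlx))
corrected = [round(m + offset, 1) for m in mlx]
```

Such a correction removes the average bias but not the per-user spread, which here still ranges over about a degree.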
Figure 4. User face identification at 1 meter away

Table 2. User face identification

  No   User               1st trial      2nd trial    3rd trial
  1    User 1 (Dita)      unsuccessful   successful   successful
  2    User 2 (Jahrona)   unsuccessful   successful   successful
  3    User 3 (Della)     successful     successful   successful
  4    User 4 (Ira)       unsuccessful   successful   successful
  5    User 5 (Vemmi)     unsuccessful   successful   successful
  6    User 6 (Ferahma)   unsuccessful   successful   successful
  7    User 7 (Ali)       successful     successful   successful
  8    User 8 (Azhar)     successful     successful   successful

Table 2 shows the results of three rounds of testing. There were several unsuccessful attempts. In the first round, several obstacles kept the camera and temperature sensor from working optimally, such as excessive lighting and standing too far from the camera and the temperature sensor. In the second and third rounds, the maximum limits of lighting and distance had been determined. The system was then tested at a greater distance to find how far it can recognize a user; the results are shown in Table 3.

Figure 5. User face identification at 2 meters away

Table 3 shows the test with the user standing two meters away from the camera. The system shows the status "unknown": even though the user was already registered, the system could not recognize the user's face. This test shows that the camera is only able to recognize faces up to 1.5 meters away.

3.3 Buzzer Testing
Speaker testing is used to ensure that the speaker can produce a clear sound. Table 3 below shows the result of the buzzer testing.
Table 3. Buzzer testing

  No   Temperature   Category   Speaker
  1    33°C          normal     off
  2    34°C          normal     off
  3    35°C          normal     off
  4    36°C          normal     off
  5    >37.5°C       high       on

Based on the data in Table 3, if the detected temperature is in the normal range, the speaker is off. If the detected temperature is above normal, i.e. above 37.5°C, the speaker is active and issues instructions as a warning.

3.4 Database Testing
In database testing, several experiments were carried out to determine how the database system stores data. The results of the database test can be seen in Figure 5.

Figure 5. Data testing result

In this test, faces that have undergone training can be saved directly to the database. People who have not been registered, or whose faces do not match the previously trained data, are not recognized. The database stores the face detection results and body temperature of employees both when they enter the office and when they go home.

4 Conclusions
The overall test results show that the tool works according to its function. The camera is only able to recognize faces up to 1.5 meters away. The buzzer, as a sign of a user who has a fever, sounds at a body temperature above 37.5°C. Across the six sensor tests, there are slight differences between the MLX sensor readings and the non-contact infrared thermometer. The temperature measured by the MLX sensor is lower than that of the infrared thermometer because the detection distance is too far from the sensor, making the sensor less accurate when reading body temperature. Meanwhile, in the first facial identification round, there were still 2 failures out of 8 trials.
This is because the location where the face was captured exposed the camera to too much light. After the camera was moved to a more suitable location, the second and third rounds were successful for all 8 users.

References
[1] Republic of Indonesia, "Republic of Indonesia Government Regulation No. 30 Year 2019," 2019.
[2] M. Hernandez-de-Menendez, R. Morales-Menendez, C. A. Escobar and J. Arinez, "Biometric applications in education," Int. J. Interact. Des. Manuf., 15(2-3), 365-380, 2021.
[3] S. C. Hoo and H. Ibrahim, "Biometric-based attendance tracking system for education sectors: a literature survey on hardware requirements," J. Sensors, 2019, 7410478, 2019.
[4] V. P. Singh, S. Srivastava and R. Srivastava, "Automated and effective content-based image retrieval for digital mammography," J. Xray. Sci. Technol., 26, 29-49, 2018.
[5] K. Okereafor, I. Ekong, I. Okon Markson and K. Enwere, "Fingerprint biometric system hygiene and the risk of COVID-19 transmission," JMIR Biomed. Eng., 5(1), e19623, 2020.
[6] F. Susanto, F. Fauziah and A. Andrianingsih, "Lecturer attendance system using a face recognition application, Android-based," J. Comput. Networks, Archit. High Perform. Comput., 3(2), 167-173, 2021.
[7] T. E. Prabowo, R. Hartanto and S. Wibirama, "Prototype of student attendance application based on face recognition using eigenface algorithm," IJITEE (International J. Inf. Technol. Electr. Eng.), 3(1), 23, 2019.
[8] A. Ghoshal, A. Aspat and E. Lemos, "OpenCV image processing for AI pet robot," International Journal of Applied Sciences and Smart Technologies (IJASST), 3(1), 65-82, 2021.
[9] Widiharso, "Teknik Dasar Elektronika Telekomunikasi," Kementerian Pendidikan dan Kebudayaan, 2013.
[10] Health Minister of the Republic of Indonesia, "Decree of the Health Minister of the Republic of Indonesia Number HK.01.07/MENKES/382/2020 about Community Health Protocol in Public Places and Facilities for the Prevention and Control of Corona Virus Disease 2019 (COVID-19)," Indonesia, 2020.

International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 1, pages 1-10, p-ISSN 2655-8564, e-ISSN 2685-9432

Frequency Distribution Fitting for Electronic Documents
Arockia David Roy Kulandai1,2,*
1Department of Computer Science, Klingler College of Arts and Sciences, Marquette University, Milwaukee, Wisconsin, U.S.A.
2St. Xavier's College (Autonomous), Ahmedabad, Gujarat, India
*Corresponding author: david.roy@marquette.edu
(Received 19-09-2020; revised 21-09-2020; accepted 21-09-2020)

Abstract
Studies of frequency distributions of natural language elements have identified some distributions that offer a good fit. Using electronic documents, we show that some of these distributions cannot be used to model the frequency of bytes in electronic documents, even when these documents represent natural language documents.

Keywords: frequency fitting, quantitative linguistics, phase change memories

1 Introduction
Mathematical linguistics has studied the frequency of phonemes, words, and graphemes in natural languages. At its best, this work is used to obtain linguistic insights or even applications. For example, the Flesch reading ease index [1] uses a combination of average word length and average sentence length. Best [2] still upholds its usefulness but notes that word length and sentence length are only indirectly related to readability.
Our own motivation does not stem from linguistics but from the study of new non-volatile memory devices and their integration into future systems. We are interested in researching how to minimize bitflips in phase change memories (PCM) [3]. PCM is a new non-volatile memory technology that offers byte-addressability, very high density, non-volatility, high retention, and high capacity. Unfortunately, PCM exhibits limited endurance. It uses energy only while reading and writing, and writing usually consumes most of the energy. The number of bitflips caused by overwriting electronic documents of one kind with documents of the same kind depends on the encoding. For example, the web-browser cache contains HTML documents which could be placed in the same area of a PCM. To find good encodings, we want to model the frequency of graphemes in these documents [3]. The most frequent encoding for internet documents is UTF-8, so our graphemes are bytes. Here, we apply the methods of mathematical linguistics to modelling the frequency of bytes. Linguists are interested in language, and graphemes are important as carriers of information on phonemes. Unlike linguists, we are interested in the effects of storing graphemes rather than of using them. This makes for important differences. For instance, a linguist is not likely to make a distinction between capital and non-capital letters. Similarly, a linguist might conflate equivalent spellings, for example the British English and US English endings "-tre" and "-ter", the recent abolition of the German letter "ß" in favor of "ss", or even remove accents in Spanish. Linguistics has shown that the frequency distribution of graphemes can be modelled successfully by one- or two-parameter distributions. Our results show that distribution fitting is less successful for bytes than for letters and phonemes.
Our research has convinced us that modelling a broad category such as text documents, using distributions and parameters fitted to one corpus, does not translate to another corpus. Evaluations of byte overwrites using these models are dangerous. Fortunately, we did find an encoding strategy that leads to energy savings for a broad class of electronic documents [3].

2 Research Methodology
We observed that the encoding, e.g. UTF-8, UTF-16, or ASCII, has a strong impact on the number of bits overwritten when storing text-based electronic documents. This translates immediately into energy savings because each bit overwrite costs energy. Also, each bit-write is potentially destructive to the cell. We therefore concentrated on HTML files stored, for example, in a browser's cache, and to a lesser extent on text files. For comparison with results in linguistics [1], [4], [5], [6], we also extracted pure text content from HTML files by gathering long text between paragraph tags, provided the text was at least 50 bytes long. This excludes instances where the webpage used a paragraph tag only as a structural element. We also only processed letters and did not include punctuation or spaces. We collected corpora from internet newspaper articles, Wikipedia, and the Project Gutenberg library of books in four European languages, namely English, German, Spanish, and French. Each corpus contained at least 10 MB of raw data. We gathered ten corpora for English and five each for the other languages. For each corpus, we then calculated the frequency of each letter in the language, or the frequency of each possible byte. We then fitted various distributions proposed in the linguistics literature to the frequency tables we obtained. For fitting we used Python's scipy module.
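The frequency-table step for bytes can be sketched with the standard library (function and variable names are ours):

```python
from collections import Counter

def byte_frequencies(data: bytes):
    """Relative frequency of each of the 256 byte values, sorted in
    descending order, ready to be fitted against a rank distribution."""
    counts = Counter(data)
    total = len(data)
    freqs = [counts.get(b, 0) / total for b in range(256)]
    return sorted(freqs, reverse=True)

freqs = byte_frequencies(b"abracadabra")
```

The same routine works for letters by counting over the language's alphabet instead of over all 256 byte values.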
We minimized the relative sum of squared differences between the ordered relative frequencies of the letters or bytes and the prediction of the distribution. Since a distribution has one, two, or three parameters, this means minimizing a function of one, two, or three variables. For each distribution, and for each of the 25 corpora, we tabulated the best fitting parameters and the goodness of fit.

3 Distributions

Zipf is an ancestor of modern quantitative linguistics, but the distribution named after him is also used almost as a default in computer science when modelling uneven usage of resources or uneven sizes. He ranked words in descending order of frequency of occurrence and observed that the frequency of the i-th word is proportional to 1/i. Thus, we fit an ordered array of n descending frequencies with the array

[α/1, α/2, α/3, …, α/n],

where α is chosen so that the array sums to one, which means that α is the inverse of the n-th harmonic number. Over time, many other distributions have been proposed to model the frequency of elements in natural languages. In his later works, Zipf generalized his distribution, matching the ordered array of n descending frequencies with

[α/1^β, α/2^β, α/3^β, …, α/n^β],

where β is a parameter of the text and α is calculated from β and the length n of the frequency array, because the probability density function (pdf) needs to sum to 1. In other words, the frequency of the i-th most frequent item, denoted by f_i, satisfies f_i ∼ 1/i^β, where ∼ denotes proportionality. This distribution is also known as the power law distribution. Mandelbrot generalized the Zipf distribution by adding a second independent parameter γ, so that f_i ∼ 1/(i + γ)^β. The Good distribution [7] is a parameter-less distribution where f_i ∼ Σ_{j=i}^{n} 1/j.
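As a sketch of the fitting target, the generalized Zipf array and the goodness-of-fit measure described above can be written as follows (function names are ours; the paper's actual fitting used SciPy):

```python
def zipf_pdf(n: int, beta: float = 1.0) -> list[float]:
    """Generalized Zipf rank-frequency array [alpha/1**beta, ..., alpha/n**beta],
    with alpha chosen so the array sums to 1. For beta = 1, alpha is the
    inverse of the n-th harmonic number."""
    raw = [1.0 / i ** beta for i in range(1, n + 1)]
    alpha = 1.0 / sum(raw)
    return [alpha * r for r in raw]

def goodness_of_fit(observed: list[float], predicted: list[float]) -> float:
    """Sum of squared differences divided by the number of symbols n,
    the per-symbol fit measure used throughout the paper."""
    n = len(observed)
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n
```

Minimizing `goodness_of_fit(freqs, zipf_pdf(len(freqs), beta))` over β is then a one-variable optimization problem, and analogously two or three variables for the richer distributions below.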
We parameterize the Good distribution by setting f_i ∼ Σ_{j=i}^{n} 1/j^α. In addition, we went through the list of distributions given by Li and Miramontes [5]:

Exponential: f_i ∼ exp(−αi)
Logarithmic: f_i ∼ 1 − α log(i)
Quadratic logarithmic: f_i ∼ 1 − α log(i) − β (log(i))²
Weibull: f_i ∼ (log((n + 1)/i))^α
Cocho–Beta: f_i ∼ (n + 1 − i)^β / i^α
Frappat: f_i ∼ βi + exp(−αi)
Yule: f_i ∼ β^i / i^α
Menzerath–Altmann: f_i ∼ exp(−β/i) / i^α

The actual value of the pdf of a distribution with f_i ∼ ψ(i, α, β) is c·ψ(i, α, β), where 1/c is equal to Σ_{i=1}^{n} ψ(i, α, β). The purpose of c is to ensure that the pdf sums to 1. Here n is the number of symbols, namely n = 256 for bytes and, for text, the number of letters in the language. We follow the notation of Li and Miramontes [5], which has idiosyncrasies. For some values of β, Yule and Menzerath–Altmann are virtually indistinguishable. What Li and Miramontes call the Yule distribution is in fact not the well-known Yule–Simon distribution. The Yule–Simon distribution would have f_i ∼ αB(i, α + 1), where B is the beta function, but it is not suited for frequency matching.

4 Results

There are two criteria for a distribution to be fit for modelling. Most importantly, the distribution should predict the frequencies well. We measure this by calculating the sum of the squared differences and dividing it by the number of symbols. The number of symbols n is equal to 256 when we process raw documents, consisting of bytes. For text, it is the total number of letters that can appear. Dividing by n allows comparisons between text and raw data.

Figure 1. Distribution of the parameter for fitted one-parameter distributions.

The second criterion is good clustering of the parameters.
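The normalization constant c can be illustrated directly. The following sketch (our own; names and signatures are assumptions, and only a subset of the listed shapes is included) turns an unnormalized shape ψ(i, α, β) into a pdf over n ranks:

```python
import math

# Unnormalized shapes psi(i, alpha, beta) for a few of the listed
# distributions; n is the number of symbols (256 for raw bytes).
SHAPES = {
    "exponential":       lambda i, n, a, b: math.exp(-a * i),
    "logarithmic":       lambda i, n, a, b: 1 - a * math.log(i),
    "cocho_beta":        lambda i, n, a, b: (n + 1 - i) ** b / i ** a,
    "yule":              lambda i, n, a, b: b ** i / i ** a,
    "menzerath_altmann": lambda i, n, a, b: math.exp(-b / i) / i ** a,
}

def pdf(name: str, n: int, alpha: float, beta: float = 0.0) -> list[float]:
    """Scale psi by c = 1 / sum(psi) so the predicted frequencies sum to 1."""
    psi = SHAPES[name]
    raw = [psi(i, n, alpha, beta) for i in range(1, n + 1)]
    c = 1.0 / sum(raw)
    return [c * r for r in raw]
```

Each returned array is then directly comparable to an observed rank-frequency table of the same length.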
If two different corpora can be fitted well by the same distribution but with widely different parameters, then either we have too many parameters or the parameters are specific to one corpus. In the first case, we are better off with a distribution with fewer parameters, and in the second case the distribution with these parameters does not generalize and is not suitable for modelling. For one-parameter distributions, the fitted parameters lie close together, often in bands determined by the language (Figure 1). Only the parameters for German raw documents are more spread out, in the case of the Weibull distribution and the exponential distribution. In Figure 1, we plotted the sole parameter along the x-axis, multiplying the parameter for the logarithmic distribution by 10 and the parameter for the exponential distribution by 20. Because the best fitting parameters generally appear in small ranges, with occasional differences between the languages, we conclude that modelling byte distributions with a single parameter will apply across a broad spectrum of corpora, as long as they are in the same language.

Figure 2. Parameters for Zipf–Mandelbrot for text (left) and raw HTML (right) corpora.
Figure 3. Parameters for Yule for text (left) and raw HTML (right) corpora.
Figure 4. Parameters for Cocho–Beta for text (left) and raw HTML (right) corpora.
Figure 5. Parameters for Menzerath–Altmann for text (left) and raw HTML (right) corpora.
Figure 6. Parameters for quadratic logarithmic for text (left) and raw HTML (right) corpora.
Table 1. Range and average of goodness of fits for distributions and language corpora.

English
Method              Fit range (text)      Text avg    Fit range (raw)       Raw avg
Zipf                0.008573–0.009917     0.009379    0.005736–0.008065     0.006442
Good                0.004025–0.004987     0.004601    0.003446–0.005398     0.004305
Logarithmic         0.002510–0.002940     0.002745    0.002630–0.005295     0.004440
Weibull             0.002129–0.002758     0.002481    0.001313–0.002383     0.001534
Exponential         0.000649–0.000908     0.000795    0.000109–0.000258     0.000208
Zipf–Mandelbrot     0.000664–0.000901     0.000801    0.000115–0.000175     0.000143
Yule                0.000649–0.000908     0.000795    0.000109–0.000167     0.000134
Cocho–Beta          0.000507–0.000620     0.000568    0.000109–0.000173     0.000146
Quadratic log       0.060180–0.064961     0.062800    0.015871–0.023709     0.019974
Menzerath–Altmann   0.000627–0.000838     0.009440    0.000102–0.000160     0.000132
Frappat             0.000651–0.000814     0.009440    0.000109–0.016701     0.004980

German
Method              Fit range (text)      Text avg    Fit range (raw)       Raw avg
Zipf                0.003868–0.004299     0.004149    0.001605–0.002684     0.002095
Good                0.001315–0.001661     0.001509    0.000785–0.001904     0.001209
Logarithmic         0.004030–0.004748     0.004310    0.005799–0.021800     0.013460
Weibull             0.000444–0.000653     0.000521    0.000511–0.006085     0.003124
Exponential         0.001871–0.002235     0.002009    0.003151–0.016770     0.009970
Zipf–Mandelbrot     0.001061–0.001414     0.001194    0.001236–0.002684     0.001882
Yule                0.000431–0.000649     0.000515    0.000401–0.002671     0.001493
Cocho–Beta          0.000323–0.000500     0.000392    0.000344–0.002625     0.001416
Quadratic log       0.067545–0.071953     0.069897    0.028288–0.059766     0.044443
Menzerath–Altmann   0.000431–0.000649     0.009389    0.000401–0.002671     0.001493
Frappat             0.001353–0.001763     0.009389    0.002006–0.004427     0.003596

Spanish
Method              Fit range (text)      Text avg    Fit range (raw)       Raw avg
Zipf                0.011111–0.011337     0.011283    0.006152–0.006370     0.006254
Good                0.006357–0.006546     0.006494    0.004220–0.004456     0.004333
Logarithmic         0.004410–0.004520     0.004474    0.004648–0.005131     0.004877
Weibull             0.003246–0.003374     0.003314    0.001400–0.001472     0.001435
Exponential         0.001116–0.001281     0.001193    0.000163–0.000208     0.000184
Zipf–Mandelbrot     0.001123–0.001297     0.001195    0.000063–0.000114     0.000085
Yule                0.001116–0.001281     0.001193    0.000095–0.000126     0.000111
Cocho–Beta          0.000817–0.000972     0.000891    0.000135–0.000159     0.000147
Quadratic log       0.074113–0.074588     0.074396    0.018778–0.020212     0.019447
Menzerath–Altmann   0.001068–0.001248     0.011604    0.000095–0.000126     0.000110
Frappat             0.001021–0.001183     0.011604    0.000094–0.000149     0.000117

French
Method              Fit range (text)      Text avg    Fit range (raw)       Raw avg
Zipf                0.010303–0.010482     0.010381    0.006304–0.006504     0.006358
Good                0.006042–0.006157     0.006093    0.004383–0.004586     0.004442
Logarithmic         0.005219–0.005289     0.005244    0.005038–0.005300     0.005148
Weibull             0.003713–0.003848     0.003778    0.001451–0.001534     0.001473
Exponential         0.002739–0.002869     0.002795    0.000165–0.000203     0.000181
Zipf–Mandelbrot     0.002720–0.002860     0.002776    0.000104–0.000128     0.000113
Yule                0.002677–0.002797     0.002736    0.000093–0.000120     0.000102
Cocho–Beta          0.002216–0.002328     0.002270    0.000109–0.000141     0.000120
Quadratic log       0.074867–0.075727     0.075307    0.020078–0.020616     0.020275
Menzerath–Altmann   0.002677–0.002797     0.012376    0.000093–0.000120     0.000102
Frappat             0.002656–0.002783     0.012376    0.000129–0.016198     0.003352

For two-parameter distributions, the situation is more difficult. In some cases, such as the Zipf–Mandelbrot distribution (Figure 2), the parameters are nicely clustered by language if we only look at text. If, however, we look at raw text, then the English cluster dissolves. For the five German corpora, the parameters are too widely distributed for both text and raw files. We attribute this to over-fitting, a phenomenon well known from machine learning: fitting Zipf–Mandelbrot "learns" the corpus but not the general category.
In addition, we observe that the parameters for raw HTML lie along a line, indicating a linear relationship between the two parameters. This suggests that the distribution should be made into a one-parameter distribution. In fact, as can be seen from Table 1, the goodness of fit for Zipf–Mandelbrot is better than for the Zipf distribution, but still in the worst range among the two-parameter distributions. Similarly, the parameters for Menzerath–Altmann are nicely clustered for text but lie on a one-dimensional curve for the raw corpora. For the quadratic logarithmic distribution, the results again differ between text and raw corpora. For this reason alone, a number of distributions suitable in linguistics are not suitable for modelling byte frequencies. We refer readers to Figures 2–6 for the details of our results.

5 Discussion

Our interest is not in linguistics but in modelling the overwriting of non-volatile memory. Therefore, our frequency tables make a distinction between capital and non-capital letters. For a linguist, this distinction is probably artificial. Also, unlike, for example, Li and Miramontes [5], we do not conflate letters that differ only in an accent or umlaut, because they are encoded differently even though they can be considered the same letter. We gave results for texts as a comparison point for raw data. For example, we learned that some distributions, such as Zipf–Mandelbrot, overfit for raw data and are therefore probably useless for analytics, while this does not happen for text. Overall, just as in the work of Li and Miramontes, the Cocho–Beta distribution and the Yule distribution allow the best fits without the overfitting phenomenon. Among single-parameter distributions, the Zipf or power law distribution does not fare so well, as it is outperformed by the exponential distribution and by the parameterized Good distribution.
6 Conclusion

Frequency modelling of bytes in electronic documents can be done with the exponential distribution. While a better fit can be achieved with the Menzerath–Altmann distribution or the Cocho–Beta distribution, their parameter ranges are not only language- but also corpus-specific. It is hard to see how scientific conclusions can be obtained with such variety. When restricted to text, this observation does not hold.

References

[1] R. Flesch, "A new readability yardstick," Journal of Applied Psychology, 32 (3), 221, 1948.
[2] K. H. Best, "Sind Wort- und Satzlänge brauchbare Kriterien zur Bestimmung der Lesbarkeit von Texten?" In: Wichter, Sigurd / Busch, Albert (eds.), Wissenstransfer – Erfolgskontrolle und Rückmeldungen aus der Praxis, Peter Lang Verlag, Frankfurt, 2006.
[3] A. D. R. Kulandai and T. Schwarz, "Content-aware reduction of bit flips in phase change memory," IEEE Letters of the Computer Society, 2020.
[4] B. Krevitt and B. Griffith, "A comparison of several Zipf-type distributions in their goodness of fit to language data," Journal of the American Society for Information Science, 23 (3), 220, 1972.
[5] W. Li and P. Miramontes, "Fitting ranked English and Spanish letter distribution in U.S. and Mexican presidential speeches," Journal of Quantitative Linguistics, 18 (4), 359–380, 2011.
[6] C. Manning and H. Schütze, "Foundations of Statistical Natural Language Processing," MIT Press, 2003.
[7] H. Pande and H. S. Dhami, "Mathematical modelling of occurrence of letters and word's initials in texts of Hindi language," SKASE Journal of Theoretical Linguistics, 7 (2), 2010.
International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 2, pages 149–158, p-ISSN 2655-8564, e-ISSN 2685-9432

An Enhanced Multi-Level Authentication Electronic Voting System

Ayodeji O. J. Ibitoye1,*, Halleluyah O. Aworinde1, Esther T. Adekunle1
1 Computer Science Department, Bowen University, Iwo, Osun State, Nigeria
* Corresponding author: ayodeji.ibitoye@bowen.edu.ng
(received 06-10-2022; revised 01-11-2022; accepted 14-11-2022)

Abstract

Manual voting systems are beset by issues such as result manipulation, errors, long result-computation times, ineligible voters, and void votes, among others. Electronic voting systems helped overcome the challenges of manual voting, only to engender other problems of phishing, man-in-the-middle attacks, and voter impersonation. Given these challenges, the integrity of election results in a distributed system has become another top concern for reliable e-voting. To achieve improved voter authentication and result validation with an excellent user experience, a facial recognition electronic voting system powered by blockchain technology was developed here. The entire set of election engineering activities is decentralised, with improved security features to enhance transparency, verifiability, and accountability for each vote count. The self-service voting system was built as a smart contract and implemented on the Ethereum network. The obtained reports and evaluations reflected a non-editable and independently verifiable voting system. It also has a competitive edge over fingerprint-enabled e-voting systems.
Aside from its excellent usability and general acceptance, the developed method curtailed, to a large extent, intended fraudulent actions in election activities by eliminating the involvement of a middleman, while facilitating privacy, convenience, eligibility, and satisfactory voters' rights.

This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Keywords: blockchain technology, facial recognition, electronic voting system, smart contracts, Ethereum network

1 Introduction

The advent of democracy as a system of government, and the freedom of expression of citizens through voting, has become the norm in the process of selecting leaders all over the world. While the right and power of citizens to vote without malpractice cannot be underestimated, voting remains a way of resolving conflicting and inconsistent viewpoints in major policy decisions and/or in deciding who leads over a period. Although the process may be controversial, depending on the adopted voting system, the decision becomes communal after each electorate has expressed his or her opinion [1]. For instance, in Nigeria, candidates belonging to a political party are elected to local government, state, or federal positions, either executive or legislative, over a period. With fairness and the existing democratic rule as factors for consideration, voting processes remain an evolving domain, moving from manual to electronic systems with the goal of building a credible, verifiable, transparent, and integrity-driven e-voting system.
While the electronic system of voting is also exposed to denial-of-service (DoS) attacks, server hacking, hardware malfunction, and administrative manipulation, among others, e-voting is being investigated widely, and countless operational models have been deployed and are in use as stable solutions for voters' freedom and rights across multifaceted parts of society. By this development, e-voting remains the most capable voting system, with capacity for innovative improvement of its shortfalls when compared with manual voting. Hence, the process of improving openness, vote privacy, invulnerability, and verifiability, especially in a decentralised voting ecosystem, is more important today, as technology keeps improving in proportion with cybercrime [2]. In more recent times, blockchain technology has been adopted for secure and transparent processes. For instance, in Bitcoin, transactions can be tracked clearly and in real time, irrespective of volume and currency, despite the dispersed wallets. By this, a central authority is not required in a point-to-point system, and through the use of cryptographic approaches the system is kept secure and unbroken [3]. Therefore, in e-voting, a decentralised blockchain system with end-to-end encryption can be used to address vote tampering while promoting vote tallying in real time. However, it is important to embed such e-voting systems with biometric or computer vision infrastructure in order to curb the human-generated challenges of ineligibility, verifiability, and impersonation across an e-voting cycle of registration, validation, collection, and tallying.
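To illustrate why a chained cryptographic structure makes recorded votes tamper-evident, here is a toy hash chain in Python. This is not the paper's Ethereum implementation, merely a minimal sketch of the underlying idea; all names are ours:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a vote record together with the previous block's hash, so that
    altering any record changes every subsequent hash in the chain."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list[dict]) -> list[dict]:
    chain, prev = [], GENESIS
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any tampered record or broken link fails."""
    prev = GENESIS
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True
```

A real blockchain adds distributed consensus and signatures on top of this chaining, but the tamper-evidence property shown here is the core of what the e-voting designs cited below rely on.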
Several biometric features, such as fingerprints, have been combined with blockchain technology to build e-voting systems; the essence of this work is to fuse facial recognition software with blockchain technology in e-voting system development. Thus, Section 2 presents an overview of related work on e-voting systems. Section 3 discusses the multi-level e-voting system using facial recognition and blockchain technology. Sample experimental evaluations and findings are presented in Section 4, before the conclusion and recommendation in Section 5.

2 Sample Related Works

In a democratic system, voting remains an important tool for people to express their opinions regarding policies, choice of leadership, and more. Over the years, different methods have been employed to define a sustainable election process. Some of these include the use of paper, wherein voters use a pencil or thumbprint to mark the preferred candidate. Then an optical scanner [4] or hand counting is used for tallying, and results are computed on a Microsoft Excel sheet or, in some instances, calculated with a calculator. As the world evolves with technology, some countries still trust the paper-ballot voting system, whether to satisfy self-interest, genuinely due to problems of digital infrastructure and change processes, or for its advantages. As technological growth continued, electronic systems for recording, storing, and processing voters' data in digital form were developed. E-voting, for instance, uses a digital ballot instead of the paper ballot; the entire voting cycle, from registration to result computation, is then also performed electronically. For instance, direct-recording electronic (DRE) machines, as portable computers, have been built to display ballot choices and electronically record votes.
By pressing the touch screen, the voter's decision is authenticated [5], and a vote audit or recount, as the case may be, is possible through the voter-verified (or verifiable) paper audit trail (VVPAT). [6] also encoded fingerprints on smart cards through a secret-splitting algorithm to develop an electronic biotechnology voting system (EBTVS) for voter verification. Consequently, with the advent of blockchain technology, several e-voting systems have been developed over time. [7] deployed an e-voting system which uses a biometric device for validation during registration on a smart contract over an Ethereum network. The application explores the latent use of decentralised networks to audit and understand electoral procedures. Similarly, the application of blockchain technology for the deployment of a distributed e-voting system was appraised by [8], which also recognised the legal and technological restrictions of using blockchain technology as a service for e-voting systems. In a bid to advance the need for blockchain technology in e-voting, [9] encouraged electoral involvement through a decentralised blockchain technology; the solution addresses vote tampering and upholds transparency in the vote cast while protecting the voters. Indeed, as the call for decentralised voting systems heightened, [10] projected a decentralised e-voting system with blockchain. The developed two-level architectural system provided a safe voting process free of redundancy. The application also ensured that the necessary voting criteria are satisfied for an anonymous but transparent vote count. Subsequently, a permission-driven multichain platform for voting was developed by [11].
The software allows a distinct administrator to configure the blockchain to the desired specification, while voters' identities are confirmed through a fingerprint recognition system to secure the vote cast. Besides a crypto-voting system by [12] built on two linked blockchains, [13] provided prerequisite information to voters on the difficulties and risks associated with blockchain e-voting through an architecture trade-off analysis method (ATAM). Computational cryptographic trust has proven more reliable than human trust in voting. By this and more, smart contracts were also deployed by [14] with the goal of addressing challenges with voting accuracy, security, and privacy. Furthermore, hashing was deployed by [15] to ascertain that each vote counts while preserving the anonymity of the voter. This is related to the electronic voting protocol by [16], whose system ensures the security of every voter's identity while keeping recorded vote results tamper-proof. No doubt, blockchain technology has continued to provide positive support for e-voting systems with fingerprint technology, which is secured from human corruption. However, due to the fallouts of fingerprint technology and the fast adoption of facial recognition systems, a blockchain-distributed, facially recognised e-voting system is presented in Section 3 for consideration.

3 Multi-Level Authentication E-Voting System

The object detection model used here is YOLO v3. The Raspberry Pi, due to its limited computing power, cannot run the original YOLO model; hence we use YOLO Tiny, a small yet efficient version of YOLO. For the datasets we used Google's Open Images Dataset V6. To train the model, we have to set the number of batches for the datasets, which determines the number of training iterations.
The ideal number of iterations is 2000 batches per object class; in our case, with 7 objects, this gives 14,000 batches. While the model is training, the prime objective of the algorithm is to decrease the average loss. We started with an average loss of 4.5 and reached an average loss of 1.08. Figure 1 shows the graph depicting the decrease in average loss as the number of iterations increases. The facial recognition e-voting system uses the Ganache blockchain to manage voters and administrative activities in a voting cycle. The voting procedures are deployed as a smart contract on an Ethereum network (blockchain). The architecture of this system, as presented in Figure 1, shows how the several modules of the system synergise to attain the goal of an authentic, verifiable, invulnerable, convenient, and transparent voting system.

Figure 1. Facial recognition e-voting system using blockchain.

Initially, from the available dataset, voters who did not satisfactorily meet the voting criteria were eliminated to avoid data repetition and inconsistency. Then, 215 voters with the right voting criteria validated their preregistered information to obtain an Ethereum private key as a vote accreditation certificate. At this stage, multiple user images were captured from different angles for a more detailed training set. By this, the voters' data, which also included their faceprints, are matched and stored in the database. As shown in Figure 1, the face geometry, which includes the length of the jawline, the form of the cheekbones, the depth of the eye sockets, and more, was captured and analysed using the local binary pattern (LBP) algorithm.
In order to vote, the voter logs in to the mobile application, first with a unique username and password, and then by validating the user's face with the embedded LBP facial recognition algorithm, as a two-factor authentication procedure. Following the existing training and preprocessing procedures for registered images, the face of the voter is scanned to ascertain a matching degree against the array of images in the database. Whenever the voting system is used and the voter's face is presented, the following procedure takes effect:

1. LBPs are extracted, and the resulting histograms from each of the cells are weighted and concatenated, just like the training data.
2. k-NN (with k = 1) is then performed with the χ² distance, with the goal of finding the closest face in the e-voting training data.
3. The user's faceprint is associated with the smallest χ² distance in the final classification, if found.

These extracted features, as further illustrated in Figure 2, are packaged to activate face recognition at every authentication and validation protocol call. This is possible through direct conversion of analog facial information to a digital equivalent, tagged the faceprint. Features are extracted from the faces through vector point analysis before the classification is obtained. While each faceprint is unique to an individual, just like a thumbprint, face validation activities are executed to match and compare the user's face against the information stored during capture, for further decision support, before a voter or administrator accesses the developed e-voting system. Therefore, if there is a match, the voter is granted approval to cast a vote by choosing the preferred candidate and then clicking the "vote" button.
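The three-step matching procedure above can be sketched compactly. This is our own minimal illustration of LBP histograms with a χ² nearest-neighbour search, assuming tiny grayscale images given as lists of rows; it omits the cell-wise weighting, and a production system would use a library implementation such as OpenCV's LBPH recognizer:

```python
def lbp_codes(img):
    """Basic 3x3 local binary pattern: each interior pixel is encoded by
    comparing its 8 neighbours against it (img is a list of rows of ints)."""
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def histogram(codes):
    """Normalized 256-bin histogram of LBP codes (the 'faceprint')."""
    hist = [0.0] * 256
    for c in codes:
        hist[c] += 1
    return [v / len(codes) for v in hist]

def chi2(p, q, eps=1e-10):
    """Chi-squared distance between two histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def closest_face(probe_hist, gallery):
    """1-NN over the gallery dict: identity with the smallest chi2 distance."""
    return min(gallery, key=lambda name: chi2(probe_hist, gallery[name]))
```

In the real system, the probe histogram comes from the camera frame at login, and the gallery holds the histograms computed from the multi-angle images captured at registration.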
In addition, a voter can also ascertain that his or her vote decision has not been manipulated in an ongoing electoral process. Once the scheduled election period is over, vote collection takes place, followed by tallying. Then the administrator announces the vote results with satisfactory evidence.

Figure 2. Block diagram of a facial recognition system.

4 Evaluation Report

The developed blockchain e-voting system with face recognition achieved excellent results when evaluated on the following principles:

a. Eligibility: the ability to allow only registered voters to vote, and only once, based on the defined electoral procedures. Thus, with the two-factor authentication, only authorised voters can access the voting system.
b. Privacy: the ability to leverage the cryptographic properties of the blockchain to ensure that each vote is kept secret through vote hashing.
c. Verifiability: the ability of a voter to track vote status in the tallying system through the unique Ethereum ID.
d. Convenience: the ability of an eligible voter to vote easily, without bias or discrimination.
e. Usability: a measure of how well a user can use the application, based on process and design flow.

The analysis presented in Figure 3 was obtained.

Figure 3. Sample voters' evaluation report.
5 Conclusion

Overall, the facial recognition voting system also helped resolve the challenge of varying fingerprint patterns, which cannot be recreated or re-rendered. In addition, in the new normal, where people are encouraged to touch less, self-service remote face-ID verification can be achieved. However, voters disguising themselves or wearing face shields remain a challenge for future development and are not within the scope of this research.

References

[1] Y. Xie, "Who over-reports voting?", American Political Science Review, 80, 613–624, 2017.
[2] C. Ayo, O. Daramola, A. Azeta, "Developing a secure integrated e-voting system", in Handbook of Research on E-Services in the Public Sector: E-Government Strategies and Advancements, IGI Global, USA, 278–287, 2011.
[3] S. Nakamoto, "Bitcoin: a peer-to-peer electronic cash system", available: https://bitcoin.org/bitcoin.pdf, 2009.
[4] M. Rockwell, "Bitcongress – process for block voting and law", available: http://bitcongress.org/ [accessed: December 2017].
[5] A. Ndem, "Three risks posed by electronic voting", The Election Network, available: http://theelectionnetwork.com/2018/10/25/three-risks-posed-by-electronic-voting/, 2018.
[6] O. O. Adeosun, Ayodeji O. J. Ibitoye, J. O. Alabi, "Real time e-biotechnology voting system using secret splitting", International Journal of Electronics Communication and Computer Engineering, 6 (6), 2015.
[7] A. Benny, "Blockchain based e-voting system", SSRN Electronic Journal, 2020.
[8] F. P. Hjalmarsson, G. K. Hreioarsson, M. Hamdaqa, G. Hjalmtysson, "Blockchain-based e-voting system", IEEE International Conference on Cloud Computing (CLOUD), 983–986, 2018.
[9] H. V. Patil, K. G. Rathi, M. V. Tribhuwan,
"A study on decentralized e-voting system using blockchain technology", International Research Journal of Engineering and Technology (IRJET), 5 (11), 48–53, 2018.
[10] K. Isirova, A. Kiian, M. Rodinko, A. Kuznetsov, "Decentralized electronic voting system based on blockchain technology developing principals", CEUR Workshop Proceedings, 2608, 211–223, 2020.
[11] K. K. Sharma, J. Raghatwan, M. Patole, V. M. Lomte, "Voting system using multichain blockchain and fingerprint verification", International Journal of Innovative Technology and Exploring Engineering, 9 (1), 3588–3597, 2019.
[12] F. Fusco, M. I. Lunesu, F. E. Pani, A. Pinna, "Crypto-voting, a blockchain based e-voting system", IC3K 2018 – Proceedings of the 10th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, 3, 223–227, 2018.
[13] D. Thebus, O. Daramola, "E-voting system for national elections using a blockchain architecture", in Pan African International Conference on Science, Computing and Telecommunications Book of Proceedings, University of Swaziland, Kwaluseni, Swaziland, 2019.
[14] K. Sadia, M. Masuduzzaman, R. K. Paul, A. Islam, "Blockchain based secured e-voting by using the assistance of smart contract", 2019.
[15] R. Suganya, A. Sureshkumar, P. Alaguvathana, S. Priyadharshini, K. Jeevanantham, "Blockchain based secure voting system using IoT", International Journal of Future Generation Communication and Networking, 13 (3), 2134–2142, 2020.
[16] C. C. Z. Wei and C. C. Wen, "Blockchain-based electronic voting protocol", International Journal on Informatics Visualization, 2 (4–2), 336–341, 2018.
International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 1, pages 67–74, p-ISSN 2655-8564, e-ISSN 2685-9432

Application of Klein-4 Group on Domino Card

Asido Saragih 1,*, Santri Chintia Purba 2
1 Department of Mathematics, Satya Wacana Christian University, Salatiga, Indonesia
2 Department of Educational Mathematics, Christian University of Indonesia, Jakarta, Indonesia
* Corresponding author: saragihasido@gmail.com
(received 04-11-2019; revised 03-02-2020; accepted 03-02-2020)

Abstract

The Klein-4 (or Klein-V) group is a group with four elements, including the identity. Under the group's binary operation, any element operated with itself produces the identity, and any two distinct non-identity elements produce the remaining non-identity element. The focus of this paper is to explain the principle of the Klein-4 group on domino cards and to find all complete sets of possible elements. A domino set consists of 28 cards; the surface of each card is divided into two identical boxes containing combinations of dots as patterns. The final part of this research is to find all possible combinations of cards which can be used as elements of a Klein-4 group.

Keywords: Klein, domino

1 Introduction

A group is a nonempty set with a well-defined binary operation that is associative, has an identity element, has an inverse for every element, and is closed under the operation [1]. We denote a group G with binary operation ∗ as (G, ∗).

Definition 1.1. Let (G, ∗) be a group. Then (G, ∗) is an abelian group if the elements of G commute under ∗. The name "abelian" was given in honor of the Norwegian mathematician Niels Henrik Abel [2]. Further, for our convenience, let us denote a^n to be n repetitions of a under the binary operation ∗.
For example, a^3 = a ∗ a ∗ a, and if m and n are two positive integers then a^m ∗ a^n = a^(m+n). We use this abbreviation, with meaning equivalent to the repeated operation, to simplify the writing.

Definition 1.2. Let (G, ∗) be a group, a ∈ G, and e ∈ G the identity. We say a is a generator of G if every element of G can be written as a^n for some positive integer n; in this case it is not surprising to say that a generates G. Furthermore, we say a positive integer n is the order of a if n is the smallest positive integer such that a^n = e.

Definition 1.3. Let (G, ∗) be a group. Then G is a cyclic group if there exists an element a ∈ G that generates all elements of G.

Basically there are many groups we can construct, but there is one simple and interesting group called the Klein-4 group, often denoted V, which stands for "Vier" ("four" in German), while "Klein" means "small" [4]. A Klein-4 or Klein-V group is a group with four elements including the identity. In a Klein-4 group with binary operation ∗ and identity e, any element operated with itself produces e, and any element operated with e produces itself. If two different non-identity elements are operated under ∗, the result is the remaining non-identity element. For example, let G = {e, a, b, c} be a Klein-4 group. Then the operation ∗ over G is represented in Table 1 below:

Table 1. Klein-4 group

∗ | e a b c
e | e a b c
a | a e c b
b | b c e a
c | c b a e

Certainly, the conceptual simplicity of the Klein-4 group makes it interesting to observe. Collecting four objects with a binary operation that represent a Klein-4 group is not easy, but group theorists have tried to find Klein-4 group applications in daily life; to reach that purpose, of course, they need to find the generator elements. For example, Tsok and Mshelia [5] found an application in a game called "Tsorry Checkerboard", which consists of boxes with some rules and moving codes. On another side, Rietman, Karp and Tuszynski [6] explain the Klein-4 group in genetic coding.
Especially for its identity concept, Danckwerts and Neubert [7] mention it in their paper regarding the symmetries of genetic code-doublets.

Theorem 1.1. Let G be a Klein-4 group with identity e. Then every cyclic subgroup of G contains only the identity e and at most one other element, so the maximum order of a cyclic subgroup of G is 2.

Proof. Let p ∈ G. If p = e, then ⟨p⟩ = {e} is the trivial cyclic subgroup of G. If p ≠ e, then p ∗ p = e. Hence, for any even n we have p^n = e. Let n be an odd number; then we always have an integer k such that n = 2k + 1. This allows us to write p^n = p^(2k) ∗ p = e ∗ p = p. Therefore p generates only {e, p}, which has order 2, and we have proved our assertion. ∎

A domino set consists of 28 cards, with every card's surface divided into two boxes containing combinations of 0 up to 6 dots. We can represent the combinations of dots on the card surfaces as pairs of non-negative integers:

(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6), (3, 3), (3, 4), (3, 5), (3, 6), (4, 4), (4, 5), (4, 6), (5, 5), (5, 6), (6, 6).

We also need to note that for domino cards (a, b) = (b, a), where 0 ≤ a, b ≤ 6; hence no two cards in a domino set carry the same combination of dots on their surfaces. An example is shown in Figure 1 below.

Figure 1. Domino card: (3, 5) = (5, 3)

Further, in the next section we discuss the principle of Klein-4 groups applied to domino cards as our main discussion.

2 Klein-4 Group Principle on Domino Card

Suppose we have a Klein-4 group whose elements are four different domino cards. Let us define the binary operation ∗: deleting the same pattern on the cards. This binary operation can therefore only operate between two cards which share the same pattern on at least one side.
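The "deleting the same pattern" operation can be sketched in code. The sketch below is one possible model, not taken from the paper: a card is treated as the multiset of its non-blank halves (a blank counts as 0 dots), the operation is the multiset symmetric difference, and the result is padded back to a two-sided card. With this model one element set from Table 2 satisfies all Klein-4 axioms.

```python
from collections import Counter
from itertools import combinations

def op(x, y):
    """'Deleting the same pattern on card' (one possible model).

    A card is the multiset of its non-blank halves (0 = blank).
    Matching halves on the two cards cancel; what remains, padded
    with blanks, is the resulting card. Returns None when more than
    two halves remain, i.e. the operation is undefined for the pair.
    """
    a = Counter(v for v in x if v != 0)
    b = Counter(v for v in y if v != 0)
    left = list(((a - b) + (b - a)).elements())
    if len(left) > 2:
        return None
    left += [0] * (2 - len(left))
    return tuple(sorted(left))

# The worked examples from the text:
assert op((6, 1), (0, 6)) == (0, 1)
assert op((3, 0), (0, 2)) == (2, 3)
assert op((6, 6), (6, 6)) == (0, 0)

# One element set from Table 2: e=(0,0), a=(0,1), b=(1,2), c=(0,2)
G = [(0, 0), (0, 1), (1, 2), (0, 2)]
e = G[0]
assert all(op(g, g) == e for g in G)   # every element has order <= 2
assert all(op(g, e) == g for g in G)   # (0,0) acts as the identity
for x, y in combinations(G[1:], 2):    # product of two distinct
    (z,) = [g for g in G[1:] if g not in (x, y)]
    assert op(x, y) == z               # ...is the third non-identity card
```

The same four assertions pass for each of the fifteen rows of Table 2, which is exactly the Klein-4 structure the paper describes.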
For example:

(6, 1) ∗ (0, 6) = (0, 1)
(3, 0) ∗ (0, 2) = (2, 3)
(6, 6) ∗ (6, 6) = (0, 0)

Following the Klein-4 group definition, we need four different cards as elements, including the identity.

Theorem 2.1. Let G be a Klein-4 group with ∗ defined as "deleting the same pattern on card". Then (0, 0) is the identity.

Proof. Let (a, b) be an arbitrary card element of G. Since we need a card which, operated by ∗ with itself, produces the identity, the operation (a, b) ∗ (a, b) deletes a and deletes b, hence we have (0, 0). Then we obtain our assertion. ∎

Remark 2.2. The remaining three elements of the Klein-4 group are combinations of the form (0, i), (i, j), (0, j), where 1 ≤ i < j ≤ 6.

With (0, 0) as the identity, we attach all combinations of possible elements of the Klein-4 group on domino cards with binary operation ∗. All combinations appear in Table 2 below:

Table 2. Combinations of all possible elements

e | a | b | c
(0, 0) | (0, 1) | (1, 2) | (0, 2)
(0, 0) | (0, 1) | (1, 3) | (0, 3)
(0, 0) | (0, 1) | (1, 4) | (0, 4)
(0, 0) | (0, 1) | (1, 5) | (0, 5)
(0, 0) | (0, 1) | (1, 6) | (0, 6)
(0, 0) | (0, 2) | (2, 3) | (0, 3)
(0, 0) | (0, 2) | (2, 4) | (0, 4)
(0, 0) | (0, 2) | (2, 5) | (0, 5)
(0, 0) | (0, 2) | (2, 6) | (0, 6)
(0, 0) | (0, 3) | (3, 4) | (0, 4)
(0, 0) | (0, 3) | (3, 5) | (0, 5)
(0, 0) | (0, 3) | (3, 6) | (0, 6)
(0, 0) | (0, 4) | (4, 5) | (0, 5)
(0, 0) | (0, 4) | (4, 6) | (0, 6)
(0, 0) | (0, 5) | (5, 6) | (0, 6)

Theorem 2.3. Let x be a non-identity element of G. The maximum order of x is 2.

Proof. Since x = (a, b) is a non-identity element of G, the values of a and b are both between 0 and 6, and the card (a, b) appears only once in the set. Operating x with itself deletes a and b from both sides, so x ∗ x = (0, 0) = e; hence the order of x is 2. Then we obtain our assertion. ∎

3 Conclusion

The Klein-4 group is a simple group consisting of four elements, one of which is the identity. Many researchers have found applications for it in real life, in many fields outside mathematics.
This paper presented one more application, using domino cards as elements, by choosing one well-defined binary operation and proving the supporting theorems. Thus the purpose of this paper has been achieved.

References

[1] M. Eie and S. T. Chang, A Course on Abstract Algebra, World Scientific, Singapore, 2009.
[2] W. K. Nicholson, Introduction to Abstract Algebra, fourth edition, Wiley, USA, 2012.
[3] J. A. Gallian, Contemporary Abstract Algebra, eighth edition, Brooks/Cole Cengage Learning, USA, 2016.
[4] J. Minac and S. Chebolu, "Representations of the miraculous Klein group", Ramanujan Mathematics Society Newsletter, 22 (1), 2012.
[5] S. H. Tsok and I. B. Mshelia, "Application of group theory to a local game called 'Tsorry Checkerboard' (a case of Klein four-group)", IOSR Journal of Mathematics, 7 (3), 2013.
[6] E. A. Rietman, R. L. Karp and J. A. Tuszynski, "Review and application of group theory to molecular systems biology", Theoretical Biology and Medical Modelling, 8 (21), 2011.
[7] H. J. Danckwerts and D. Neubert, "Symmetries of genetic code-doublets", Journal of Molecular Evolution, 5, 1975.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 2, pages 241–254, p-ISSN 2655-8564, e-ISSN 2685-9432
This work is licensed under a Creative Commons Attribution 4.0 International License.

Optimization of Vacuum Forming Parameter Settings to Minimize Burning Defect on Strawberry Packaging Products Using the Taguchi Method

Th. Adi Nugroho1,*, Adhi Setya Hutama1 and Perwita Kurniawan1
1 Manufacturing Design Study Program, Politeknik ATMI Surakarta, Jl.
Adisucipto / Jl. Mojo No. 1, Surakarta
* Corresponding author: adi.nugroho@atmi.ac.id
(received 23-10-2022; revised 30-10-2022; accepted 02-11-2022)

Abstract

Packaging products are being developed by PT. ATMI IGI-Center. These products are made using the vacuum forming method. The packaging product is made of PP (polypropylene) and is used to pack strawberries. Several problems arise during the production process; one of them is burning product defects, often called burn marks. These problems are caused by machining parameters that have not been standardized and are instead based on trial and error or the operator's experience. The vacuum forming machine used in this production is the Formech 508FS. The optimized parameters are heat zone, heat time, and stand-by temperature. These parameters were optimized using the Taguchi method, with an orthogonal array of 9 trials, and the selected signal-to-noise ratio (SNR) criterion is smaller-is-better. The response of this research is the reduction of burn mark product defects. The results showed that to reduce burn mark defects, heating zone 1 should be set to 90%, heating zone 2 to 80%, heating time to 45 s, and stand-by temperature to 60%. With this setting, product defects reached only 13%. PT. ATMI IGI-Center is expected to use these parameter settings for the strawberry packaging production process.

Keywords: vacuum forming, defect product, Taguchi method, smaller is better

1 Introduction

PT ATMI-IGI Center is a company engaged in the manufacture of molds for plastics production and plastic product molding.
Besides producing molds and molding plastic products with the injection method, the company is currently also starting to enter the field of vacuum forming production, in the form of a mold for strawberry packaging products made of aluminum. Production is carried out using the thermoforming process, applying the vacuum forming system on the Formech 508FS machine. Thermoforming is an industrial process in which a thermoplastic sheet (or film) is processed into a new shape using heat and pressure [1].

The problem that occurs in the production of strawberry packaging is that the product often experiences burning defects, often called burn marks, as shown in Figure 1. Burn marks are product defects where the product changes in color (as if burnt), dimensions, and weight [2]. Burn marks in the packaging process happen because there is no standardization of the operating parameters, so operation is based only on the estimation and experience of the operator. This lengthens experiments and raises their cost, because the right parameter values usable for production must be found by trial. On the Formech 508FS machine, the machining parameters used are heating zones, heating time and stand-by temperature. It has also been stated that the parameters on a vacuum forming machine that need to be optimized include temperature, heating time, and heating zone [3].

Figure 1. Burning problem on the strawberry packaging product

This research focuses on optimizing parameter settings to find the parameters that are optimal in overcoming the burning defect that often occurs in the production process of strawberry packaging.
The results can later be used for strawberry packaging production, and the method can be used to find optimal parameters for future molds produced by PT ATMI-IGI Center.

2 Methodology

2.1 Research methodology

2.1.1 Taguchi method

The Taguchi method is used to optimize the parameters of the Formech 508FS. It is one of the methods of experimental design that can be used to control product quality, for example to obtain optimal parameters and material compositions [4][5]. The Taguchi method belongs to the DOE (design of experiments) realm, in which there are two main kinds of variables, namely the response variable and the independent variables [6]. In this study, the manufacture of strawberry packaging used a 3^k factorial design, where each factor has low, medium, and high levels. The quality characteristic used in the search for strawberry packaging parameters is the smaller-is-better type, i.e. a non-negative characteristic bounded below by 0, so that values smaller or closer to zero are desired [7].

In this study the independent variables are heating zones, stand-by temperature, and heating time, each with 3 levels: low, medium, and high. The level values for stand-by temperature and heating time were determined from the machine manuals, work journals, and initial experiments. The first step is to look at the temperature recommendation in the TDS (technical data sheet), a parameter journal, and the Formech 508FS operating manual, which becomes the medium level with the forecast best setting, as shown in Figure 2. Next, the high and low levels are determined by adding or subtracting a value according to the range obtained in the initial experiment.
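The smaller-is-better criterion described above is conventionally scored with the Taguchi signal-to-noise ratio. The paper does not spell out the formula, so the sketch below uses the standard textbook definition:

```python
import math

def snr_smaller_is_better(ys):
    """Standard Taguchi smaller-is-better S/N ratio (textbook formula,
    assumed here): S/N = -10 * log10( (1/n) * sum(y_i^2) ).
    A lower defect response gives a larger S/N value."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# A run with 7% defects scores better (higher S/N) than one with 60%.
assert snr_smaller_is_better([0.07]) > snr_smaller_is_better([0.60])
```

Maximizing this S/N is equivalent to minimizing the mean squared defect response, which is why the lowest-defect run wins under this criterion.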
From the initial experiments, the parameter levels obtained for stand-by temperature are 40% for the low level (−20% from the experiment), 60% for the medium level, and 80% for the high level (+20% from the test). Heating time is 45 s (low level, from trial), 50 s (medium level, from the Formech 508FS manual), and 55 s (high level, from experiment). The medium level of the heating zones is already available in the initial settings (usually used for PP material), and the high and low levels were obtained through initial trials. The heating zones used are 40%, 60%, and 80%. In zone 2 of the heating zones, the temperature is increased by 10% because the clamping contour is more complicated to form than the center of the product.

Figure 2. Heating zone formation on the Formech 508FS machine

2.1.2 Orthogonal matrix

The Taguchi method bases the design of experiments on an orthogonal array (OA) in order to obtain the optimal amount of information from a minimal number of trials. An orthogonal array is a matrix with a number of rows and columns [8]. Each column represents a certain factor or condition that can change from one experiment to another, and each row represents the levels of the factors in the experiment being performed. The choice of orthogonal array depends on the number of degrees of freedom, which gives the minimum number of experiments to be performed [9]. The formulation is given by the following equation:

degrees of freedom = Σ_{k=1}^{n} (l_k − 1)    (1)

where l_k is the number of levels of factor k, with k = 1, 2, ..., n. This research uses 3 independent variables, each with 3 levels.

2.1.3 Experimental result analysis

The analysis used to determine the relative effect of the various control factors in this study is the analysis of means (ANOM).
ANOM, or mean analysis, is used to look for the combination of control parameters that yields the optimal desired results [10].

2.1.4 Average value (mean)

The average of a quantitative sample is calculated by dividing the sum of the data values by the number of data points [11]. Suppose there is a data distribution y_1, y_2, y_3, ..., y_n; then the average is:

ȳ = (y_1 + y_2 + y_3 + ... + y_n) / n = (1/n) Σ_{i=1}^{n} y_i    (2)

2.2 Data collection process

2.2.1 Orthogonal matrix selection

Orthogonal matrix selection is done with the help of Minitab 15 software, according to the number of independent variables/factors and levels used in the study: 3 independent variables, each with 3 levels. The degrees of freedom (DoF) obtained in this study are:

DoF = independent variables × (levels − 1)
DoF = 3 × (3 − 1) = 6

The value 6 is the minimum number of experiments to be carried out. This study uses an L9 orthogonal array, which comprises 9 experiments; accordingly, in the Taguchi design table in Minitab, column L9 is selected. The research thus uses the L9(3³) orthogonal matrix with a total of 9 trials. The variations of the orthogonal array matrix, filled in with the parameter variations, are shown in Table 1.

Table 1. Parameter variations

No | Heating zones (%) | Stand-by temp (%) | Heating time (s) | Product defects (%)
1 | 40 | 40 | 45 | ...
2 | 40 | 60 | 50 | ...
3 | 40 | 80 | 55 | ...
4 | 60 | 40 | 50 | ...
5 | 60 | 60 | 55 | ...
6 | 60 | 80 | 45 | ...
7 | 80 | 40 | 55 | ...
8 | 80 | 60 | 45 | ...
9 | 80 | 80 | 50 | ...

2.2.2 Vacuum forming trial process

The material used in this research is polypropylene (PP) plastic sheet, 530 × 470 mm per sheet. The machine used is the Formech 508FS vacuum forming machine.
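The bookkeeping of Sections 2.1.2 and 2.2.1 can be sketched as follows. This is a minimal sketch assuming the standard L9(3³) level layout, which matches the run order of Table 1:

```python
# Factor levels from the study (values from Table 1).
factors = {
    "heating zones (%)": [40, 60, 80],
    "stand-by temp (%)": [40, 60, 80],
    "heating time (s)":  [45, 50, 55],
}

# Degrees of freedom, eq. (1): sum over factors of (levels - 1).
dof = sum(len(levels) - 1 for levels in factors.values())
assert dof == 6  # the L9 array (9 runs) covers this minimum

# The standard L9 run matrix as level indices (1..3), one row per trial.
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]

# Map level indices to the actual parameter values of Table 1.
runs = [tuple(levels[i - 1] for levels, i in zip(factors.values(), row))
        for row in L9]
assert runs[0] == (40, 40, 45)   # trial no. 1 of Table 1
assert runs[7] == (80, 60, 45)   # trial no. 8, the eventual optimum
```

With 6 degrees of freedom, any array of 9 or more balanced runs suffices; L9 is the smallest standard 3-level array that fits, which is why it is selected in Minitab.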
The machine is prepared in advance, with the heater at the high heating stage. The parameters according to the orthogonal matrix are then input and stored in machine memory, setting the heating zone, stand-by temperature and heating time parameters according to the independent variables and levels. The assessment aspects are determined based on the finished product requirements, the initial trial results, and the requirements list from PT ATMI-IGI Center. The assessment uses the percentage of damage, to qualify for the Taguchi smaller-is-better method. There are 5 aspects that form the basis for assessing the success of an experimental specimen, namely burning/not burning, thickness, contour, angle, and lock holes/pins.

2.3 Experimental result data analysis

Assessment based on the assessment aspects is carried out after all parameter variations and all replications have been completed. The assessment data use percent (%) units to mark the percentage of damage that occurs in the vacuum-formed products. Analysis of the experimental results was carried out using manual calculations in Microsoft Excel and confirmed by calculations in Minitab 15.

2.3.1 Analysis of the top section

After the experiments, the averages of the experimental results were obtained to determine the parameters with the most influence on defects in the top part of the product. The experimental data are shown in Table 2.
Table 2. Top section experimental data

No | Heating zones (%) | Stand-by temp (%) | Heating time (s) | Product defects (%)
1 | 40 | 40 | 45 | 60
2 | 40 | 60 | 50 | 47
3 | 40 | 80 | 55 | 60
4 | 60 | 40 | 50 | 60
5 | 60 | 60 | 55 | 47
6 | 60 | 80 | 45 | 47
7 | 80 | 40 | 55 | 60
8 | 80 | 60 | 45 | 7
9 | 80 | 80 | 50 | 40

The manual calculation of the average product-defect response is as follows, e.g. for heating zones at level 1:

heating zones lvl. 1 = (60% + 47% + 60%) / 3 = 56%

This yields the average table for the product defect response (Table 3). In the effect calculation, the effect of a variable is the difference between its highest and lowest response means; for heating zones:

56% − 36% = 20%

The magnitude of the effect of each factor is shown in Table 3 below.

Table 3. Response for means (smaller is better)

Level | Heating zones | Stand-by temp | Heating time
1 | 56% | 60% | 38%
2 | 51% | 33% | 49%
3 | 36% | 49% | 56%
Effect | 20% | 27% | 18%
Optimum level | 3 | 2 | 1

Figure 3. Minitab calculation for response for means

Figure 3 shows the means graph for the top section, processed with Minitab 15. For the heating zone parameter, the means graph shows the lowest value at level 3, namely 36%. The stand-by temperature parameter shows the lowest value at level 2, namely 33%. The heating time parameter shows the lowest value at level 1, namely 38%.

2.3.2 Analysis of the base section

After the experiments, the averages of the experimental results were obtained to determine the parameters with the most influence on defects in the base part of the product. The experimental data are shown in Table 4.
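The per-level averaging and effect calculation used in Sections 2.3.1 and 2.3.2 can be sketched as follows, using the defect data of Tables 2 and 4 (the paper's tables show the means rounded to whole percent). This is a sketch of the ANOM recipe, assuming the standard L9 run order of Table 1:

```python
# ANOM sketch: per-level mean of the defect response for each factor,
# factor effect = max - min of its level means, optimum level = the one
# with the smallest mean (smaller is better).
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),      # level indices per run:
      (2, 1, 2), (2, 2, 3), (2, 3, 1),      # (heating zones, stand-by
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]      #  temp, heating time)
top  = [60, 47, 60, 60, 47, 47, 60, 7, 40]   # top-section defect %
base = [20, 33, 60, 60, 60, 33, 33, 13, 40]  # base-section defect %

def anom(y):
    optimum, effect = [], []
    for f in range(3):  # heating zones, stand-by temp, heating time
        means = [sum(y[r] for r in range(9) if L9[r][f] == lvl) / 3
                 for lvl in (1, 2, 3)]
        optimum.append(means.index(min(means)) + 1)
        effect.append(max(means) - min(means))
    return optimum, effect

# Both sections select heating zones lvl 3, stand-by lvl 2, time lvl 1,
# matching the "optimum level" rows of Tables 3 and 5.
assert anom(top)[0] == [3, 2, 1] and anom(base)[0] == [3, 2, 1]
assert round(anom(top)[1][0]) == 20  # heating-zones effect, Table 3
```

Note that the tabulated effects use rounded means, so a recomputation from the raw data can differ from the tables by about one percentage point.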
The manual calculation of the average product-defect response is as follows, e.g. for heating zones at level 1:

heating zones lvl. 1 = (20% + 33% + 60%) / 3 = 38%

This yields the average table for the product defect response (Table 5).

Table 4. Base section experimental data

No | Heating zones (%) | Stand-by temp (%) | Heating time (s) | Product defects (%)
1 | 40 | 40 | 45 | 20
2 | 40 | 60 | 50 | 33
3 | 40 | 80 | 55 | 60
4 | 60 | 40 | 50 | 60
5 | 60 | 60 | 55 | 60
6 | 60 | 80 | 45 | 33
7 | 80 | 40 | 55 | 33
8 | 80 | 60 | 45 | 13
9 | 80 | 80 | 50 | 40

In the effect calculation, the effect of a variable is the difference between its highest and lowest response means; for heating zones:

51% − 29% = 22%

The magnitude of the effect of each factor is shown in Table 5 below.

Table 5. Response for means (smaller is better)

Level | Heating zones | Stand-by temp | Heating time
1 | 38% | 38% | 22%
2 | 51% | 36% | 44%
3 | 29% | 44% | 51%
Effect | 22% | 9% | 29%
Optimum level | 3 | 2 | 1

Figure 4. Minitab calculation for response for means

Figure 4 shows the means graph for the base section, processed with Minitab 15. For the heating zone parameter, the means graph shows the lowest value at level 3, namely 29%. The stand-by temperature parameter shows the lowest value at level 2, namely 36%. The heating time parameter shows the lowest value at level 1, namely 22%.

3 Results and Discussion

3.1 Research result on the top section

Processing of the experimental data, followed by validation, shows that the parameter set that results in a formable product according to the predetermined aspects, without burning defects, is parameter set no. 8.
The vacuum forming process parameters for experiment no. 8 are:

heating zone = level 3 = 80%
stand-by temp. = level 2 = 60%
heating time = level 1 = 45 s

Table 2 shows that parameter set no. 8 has the smallest product defect percentage, 7%.

3.2 Research result on the base section

Processing of the experimental data, followed by validation, likewise shows that the parameter set forming the product without burning defects is parameter set no. 8, with the same settings: heating zone = level 3 = 80%, stand-by temp. = level 2 = 60%, heating time = level 1 = 45 s. Table 4 shows that for the base section, parameter set no. 8 has the smallest product defect percentage, 13%.

3.3 Validation of research results

Validation of the recommended parameters shows results with the most complete forming, maintained thickness, and no parts with burning defects. The finished product, cut according to the parts used for the strawberry package and assembled, is shown in Figure 5.

Figure 5. Assembly result of the validation product

4 Conclusions

Based on the research that has been done, it can be concluded as follows:

• The optimal parameter setting to minimize burning product defects while still forming the product during the vacuum forming process on the Formech 508FS machine, for the top section of the strawberry packaging product, is the combination of heating zone 80%, stand-by temperature 60%, and heating time 45 s, which resulted in an average product defect percentage of 7%.
As for the base section, the same combination of heating zone 80%, stand-by temperature 60%, and heating time 45 s results in an average product defect percentage of 13%.

• The parameters that most affect burning defects while still forming the product are, for the top section (Table 3): stand-by temperature with an effect of 27%, then heating zone with 20%, followed by heating time with 18%. For the base section (Table 5): heating time with an effect of 29%, then heating zone with 22%, followed by stand-by temperature with 9%.

References

[1] P. W. Klein, "Fundamentals of Plastics Thermoforming", Morgan & Claypool Publishers, Ohio, 2009.
[2] K. A. Widi and L. D. Ekasari, "Studi analisa pengembangan produk limbah plastik berbasis tekanan teknologi injection moulding", Jurnal Flywheel, 8(2), 14-18, 2017.
[3] C. R. Permana, C. Budiyantoro, and B. Prabandono, "Manufaktur dan uji kinerja proses vacuum forming untuk bahan polymethyl methacrylate (PMMA)", Jurnal Material dan Proses Manufaktur, 3(1), 1-9, 2019.
[4] A. Nugroho, A. S. Hutama, and C. Budiyantoro, "Optimasi keakuratan dimensi dan kekasaran permukaan potong material akrilik dengan proses laser menggunakan metode Taguchi dan PCR-TOPSIS", Jurnal Material dan Proses Manufaktur, 2(2), 75-82, 2018.
[5] A. S. Hutama, P. Kurniawan, A. Nugroho, and W. S.
Hayu, "Studi karakteristik mekanis material limbah polypropylene (PP) untuk pembuatan produk cone benang dengan penambahan material kalsium karbonat", Jurnal Rekayasa Sistem Industri, 11(1), 101-107, 2022.
[6] A. Nugroho and A. S. Hutama, "Metode Taguchi PCR-TOPSIS untuk optimasi energi dan kecepatan grafir mesin laser", Politeknisains, 18(1), 6-11, 2019.
[7] F. K. A. Nugraha, "Shrinkage of biocomposite material specimens [HA/bioplastic/serisin] printed using a 3D printer using the Taguchi method", International Journal of Applied Sciences and Smart Technologies, 4(1), 89-96, 2022.
[8] H. A. Pamasaria, T. H. Saputra, A. S. Hutama, and C. Budiyantoro, "Optimasi keakuratan dimensi produk cetak 3D printing berbahan plastik PP daur ulang dengan menggunakan metode Taguchi", Jurnal Material dan Proses Manufaktur, 4(1), 12-19, 2020.
[9] N. Sunengsih, S. Winarni, and T. G. Amazaina, "Kajian terhadap metode Taguchi TOPSIS pada optimasi multirespon", Seminar Statistika FMIPA Unpad, Bandung, 2017.
[10] W. S. Hayu, "Optimasi parameter injection molding untuk mengurangi cycle time dan berat produk cone benang dengan metode Taguchi", Tugas Akhir, Politeknik ATMI, Surakarta.
[11] R. E. Walpole, "Pengantar Statistika", PT. Gramedia, Jakarta, 1995.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 1, pages 109–120, p-ISSN 2655-8564, e-ISSN 2685-9432
This work is licensed under a Creative Commons Attribution 4.0 International License.

The Influence of Artificial Aging on Tensile Properties of Al 6061-T4

Freddy S. R. Taebenu1, Heryoga Winarbawa1, Rines1, Budi Setyahandana1 and I. M. W.
Ekaputra1,*
1 Mechanical Engineering Department, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta
* Corresponding author: made@usd.ac.id
(received 22-05-2022; revised 27-05-2022; accepted 28-05-2022)

Abstract

This paper presents an experimental investigation of the tensile properties of Al 6061. Al 6061 was heat treated by the precipitation hardening method, consisting of T4 and T6 treatments. Al 6061 samples were heat-treated at a temperature of 430°C for 2 hours, then cooled slowly at room temperature. The T4 treatment was conducted at a temperature of 530°C for 2 hours, followed by rapid cooling in a water medium and natural ageing at a temperature of 70°C for six days. The T6 treatment is the final process of the precipitation hardening applied to Al 6061: it is carried out at 530°C for 2 hours, followed by rapid cooling in a water medium and artificial ageing at 190°C with ageing times of 3, 5, and 7 hours. The applied treatment was observed to increase the maximum tensile strength of Al 6061.

Keywords: Al 6061, precipitation hardening, ultimate tensile strength

1 Introduction

For over fifty years, aluminium (Al) alloys have ranked second after iron and steel among metals supplied to the market. The rapid growth in demand for Al alloys is attributed to their attractive characteristics [1]. Aluminium alloys have been used in aircraft construction since the 1930s; as the industrial sector with the greatest use of Al materials, the aerospace industry relies heavily on Al types 2024 and 7075.
Considerations for using Al 6061 are based on the properties of the alloy: low weight; good strength, formability, weldability, and durability; high corrosion resistance; and low cost [2]. Although generally more expensive than ferrous metals, Al has increased in use and has become competitive with ferrous alloys [3, 4]. Various heat treatment studies on Al 6061 have been carried out to improve the alloy's mechanical properties. Heat treatment is a series of processes that involve heating and cooling metallic materials in the solid state. The applied heat treatment aims to produce desired changes in the structure and properties of the metal parts. By applying heat treatment, metals can be made tougher, stronger, and more resistant to impact; heat treatment can also make metal materials softer and more elastic [5]. In particular, the effect of solution heat treatment and artificial ageing on the mechanical properties of Al 7030 and 7108 has been studied. The results related the decrease in the peak strength of Al 7030 and 7108 to the widening of the size distribution of the precipitates. Differences in the type, volume fraction, size, and distribution of the precipitated particles govern the properties, and the changes that occur with time and temperature depend on the initial state of the structure. The initial structure of a wrought material can vary from non-recrystallized to recrystallized, so this condition, together with the time and temperature of the precipitation heat treatment, affects the final structure and the resulting mechanical properties [6, 7, 8, 9]. The metastable precursors of the equilibrium phase in 6061 are precipitated in a process involving one or more combinations of complex elements.
Chemical content, heat treatment parameters, and casting conditions significantly affect the extrusion ability and determine the resulting microstructure. These results show that the properties of some Al alloys can be improved through a specific heat treatment process. Heat treatment can be done either by solution heating or artificial ageing. In solution heating, the alloy is heated to a temperature in the range of 400°C to 530°C and then rapidly cooled in an aqueous medium to room temperature. For the Al 6xxx group in particular, artificial ageing is carried out at temperatures up to 200°C, with the age hardening temperature usually in the range of 160°C to 200°C [9, 10]. Aluminium alloys subjected to solution heat treatment are believed to have varying mechanical properties, affecting the machinability of the Al in its applications. The mechanical properties of Al 6061 are associated with the type of treatment, such as the solution treatment, ageing time, and temperature applied to the alloy [11]. Al 6061 has an intermetallic phase structure composed of Si; the group is generally composed of Al-Mg-Si elements and related alloys. Maximum strength can be achieved by precipitation hardening, but the alloy's ductility is then reduced; conversely, ductility can be increased by annealing, but this causes the strength of the alloy to decrease [12, 13]. In a related study, Al 6061 was subjected to artificial ageing for 98 hours at 175°C; based on the reported results, the microstructural and mechanical properties of the specimens were not affected by the artificial ageing applied after the earlier precipitation strengthening process [14, 15].
Based on the research literature [16, 17], the T6 temper of alloy 6061 involves the formation of very fine precipitates. The precipitate formed is β″, one of the three precipitate sequences formed in the alloy matrix; the precipitates are nanometric in size and coherent with the matrix. Several studies further detail the composition of the phases contained in the Al-Mg-Si alloys; the general compositions of the precipitates formed in these alloys are listed in Table 1. The present study includes tensile tests using a universal testing machine with a 200 kN capacity, investigating Al 6061-T4 to characterize its tensile properties under T6 temper ageing conditions. From the test observations, the distribution of the tensile properties of the alloy can be determined. The initial condition is a thermal loading process, namely the application of annealing or normalizing treatment, temper T4, and temper T6. The thermal loading applied to alloy 6061 is limited to the solid state of the alloy; therefore, the maximum temperature used is below the solidus temperature of 582°C. After the thermal loading process, the tensile test results are recorded by computer. From the review of the cited literature, no work was found on artificial ageing, or the precipitation hardening response, of 6061 alloys applying T4 and T6 temper heat treatment with varying artificial ageing time, or on their effect on the tensile behaviour of 6061-T4. Therefore, this study focuses on the effect of the T6 ageing time on the tensile strength behaviour of 6061-T4.

Table 1.
Compositions of the precipitates contained in Al-Mg-Si alloys.

    Phase      Composition
    GP zone    Mg1Si1
    β″         Mg5Si6
    β′         Mg9Si5
    β          Mg2Si

2 Research Methodology

2.1 Specimen Preparation

Cylindrical specimens with a length of 100 mm and a diameter of 22 mm were supplied as extruded rods, which were then machined conventionally to reduce the diameter to 17 mm. The specimens were subsequently formed to the standard dimensions specified in ASTM E8 for tensile testing. After the various heat treatments were applied, each specimen was finished by conventional machining. Two tensile specimens were machined for each of the four treatment variations, for a total of eight specimens. The Al 6061-T4 specimens prepared for tensile testing according to the ASTM E8 standard are shown in Figure 1. Based on the ageing time variations of the T6 temper applied to Al 6061-T4, the specimens are categorized in Table 2.

Figure 1. Tensile test specimen.

Table 2. Categorization of specimens (SHT: solution heat treatment; AA: artificial ageing).

    Specimen category    Condition of specimen
    A                    Base metal without treatment
    B                    SHT 530°C for 2 h and AA 190°C for 3 h
    C                    SHT 530°C for 2 h and AA 190°C for 5 h
    D                    SHT 530°C for 2 h and AA 190°C for 7 h

2.2 Heat Treatment

In metallurgical components, several processes affect the hardening mechanism of a material's structure. The processes applied to Al 6061 comprise annealing or normalizing, temper T4, and temper T6. In the primary process, specimens were normalized at 430°C, held for 2 hours, and then cooled slowly at room temperature.
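For bookkeeping, the specimen groupings of Table 2 can be encoded as a small lookup; a minimal Python sketch (the field names and the `describe` helper are illustrative, not from the paper):

```python
# Table 2 specimen categories: solution heat treatment (SHT) followed by
# artificial ageing (AA); category A is untreated base metal.
SPECIMENS = {
    "A": None,  # base metal, no treatment
    "B": {"sht_c": 530, "sht_h": 2, "aa_c": 190, "aa_h": 3},
    "C": {"sht_c": 530, "sht_h": 2, "aa_c": 190, "aa_h": 5},
    "D": {"sht_c": 530, "sht_h": 2, "aa_c": 190, "aa_h": 7},
}

def describe(cat):
    """Render a category as the condition string used in Table 2."""
    t = SPECIMENS[cat]
    if t is None:
        return "base metal without treatment"
    return (f"SHT {t['sht_c']} C for {t['sht_h']} h and "
            f"AA {t['aa_c']} C for {t['aa_h']} h")

print(describe("B"))  # SHT 530 C for 2 h and AA 190 C for 3 h
```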
Next, the specimens were heated for 2 hours at 530°C to obtain a solid solution phase before being rapidly cooled in a water medium to room temperature. The specimens were then left at room temperature for six days (144 hours). This sequence constitutes the T4 temper treatment. The T6 solid solution heat treatment with artificial ageing is the final hardening process applied to Al 6061: the alloy was heat-treated for 2 hours at 530°C, cooled in a water medium to room temperature, and then aged at 190°C for 3, 5, or 7 hours.

2.3 Tension Test

The tensile test is one of the most commonly used mechanical stress-strain tests. The test begins by fixing the specimen in the grips; the applied load and the resulting elongation are then measured continuously and simultaneously. The output of the tensile test is recorded as the relationship between the applied load, referred to the cross-sectional area of the specimen, and the elongation. The specimens, with and without additional heat treatment, were tested under uniaxial static loading. The tensile strength was determined using a universal testing machine with a capacity of 200 kN at a deformation rate of 0.01 mm per second.

3 Results and Discussion

3.1 Tensile Test Data

Based on the tensile tests, the mechanical properties of the Al 6061-T4 specimens were obtained, as shown in Table 3. The effects of the artificial ageing time variation applied to Al 6061-T4 are reported in this analysis.
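Converting the recorded load and elongation into engineering stress and strain amounts to dividing by the initial cross-sectional area and gauge length; a minimal Python sketch with illustrative numbers (the geometry and readings below are not the authors' raw data):

```python
import math

def engineering_stress_strain(force_n, elong_mm, d0_mm, l0_mm):
    """Convert load (N) and elongation (mm) into engineering stress (MPa)
    and strain (mm/mm) using the initial specimen geometry."""
    a0 = math.pi * (d0_mm / 2.0) ** 2    # initial cross-section, mm^2
    stress = [f / a0 for f in force_n]   # N/mm^2 == MPa
    strain = [dl / l0_mm for dl in elong_mm]
    return stress, strain

# Illustrative load-frame readings (not the paper's data), for a round
# specimen of assumed diameter 12.5 mm and gauge length 50 mm:
force = [0.0, 5000.0, 10000.0]
elong = [0.0, 0.05, 0.12]
stress, strain = engineering_stress_strain(force, elong, d0_mm=12.5, l0_mm=50.0)
```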
The tensile behaviour is analyzed over the tensile test cycle: the ultimate tensile strength, the fracture stress before material failure, the yield strength, the plastic behaviour, and the stress-strain relationship. Figure 2 shows that the highest ultimate tensile strength of Al 6061-T4 is obtained at the 3-hour ageing condition, followed in order by the 5-hour and 7-hour ageing conditions and, finally, the untreated condition.

Table 3. Mechanical properties of the aluminium alloy 6061-T4 specimens.

    Specimen category                  A         B         C         D
    Ultimate tensile stress (MPa)   102.36    264.73    219.74    199.51
    Fracture stress (MPa)            66.11    204.10    162.34    111.07
    Yield stress (MPa)               48.164   217.19    177.67    175.31
    Modulus of elasticity (MPa)    2305.76   3086.40   2003.60   2712.85
    Strain hardening exponent         0.19     0.031     0.088     0.048

Figure 2. Tensile test results with various ageing times.

In contrast to the ultimate tensile strength, the highest durability of Al 6061-T4 is found in the untreated condition, followed by the 5-, 3-, and 7-hour ageing conditions. Durability, or the mechanical resistance of a material, is the condition in which the material can maintain the structure of the atomic particles in its matrix under load until it reaches a weakened state and finally fails. The progressive change in the shape of the material structure is represented by the progression of the data points shown in Figure 2. Figure 3 shows that the highest ultimate tensile strength, 264.73 MPa, occurs at the 3-hour ageing condition.
Then, the 5-hour ageing condition gives 219.74 MPa, followed by the 7-hour ageing condition at 199.51 MPa and, finally, the untreated condition at 102.36 MPa. In addition to the ultimate tensile strength, Figure 2 also shows the endurance limit, or mechanical resistance, of Al 6061-T4, expressed as the fracture stress. The fracture stresses are as follows: the untreated condition fails at 66.11 MPa with a strain range of about ±0.35 mm/mm; the 5-hour ageing condition at 162.34 MPa with a strain range of about ±0.30 mm/mm; the 3-hour ageing condition at 204.10 MPa with a strain range of about ±0.27 mm/mm; and the 7-hour ageing condition at 111.07 MPa with a strain range of about ±0.21 mm/mm. The analysis of Figure 3 shows differences between the fracture stress and the strain range; the comparison of these two mechanical resistance parameters reflects the behaviour of the 6061-T4 structure, namely its ductile or brittle character. In theory, the ductile or brittle character of a material can be determined by comparing the ultimate tensile strength to the yield stress; the comparison is judged against specific threshold values that distinguish the ductile and brittle characteristics of the material. The resulting ratio is known as the strain hardening ratio. According to the criterion used here, if the strain hardening ratio is greater than 1.4 the alloy will experience cyclic softening, while for a strain hardening ratio between 1.2 and 1.4 the alloy will experience cyclic hardening.
Table 4 shows that the highest strain hardening ratio is obtained in the untreated condition. In order, the strain hardening ratios are as follows: 2.13 for the untreated condition, 1.24 for the 5-hour ageing condition, 1.22 for the 3-hour ageing condition, and 1.14 for the 7-hour ageing condition. These results show that, in the untreated condition, 6061-T4 softens cyclically, as evidenced by a strain hardening ratio greater than the theoretical reference value of 1.4. This confirms that the strain hardening ratio in the untreated condition describes tensile behaviour of Al 6061-T4 that is cyclically softening. Conversely, under the artificial ageing variations, the Al 6061-T4 structure undergoes cyclic hardening, as evidenced by hardening ratios within the reference range; the artificially aged conditions thus meet the stated criterion that the value lies between 1.2 and 1.4. The calculations also show that the highest strain hardening ratio among the aged conditions occurs at the 5-hour ageing time, followed by the 3-hour and then the 7-hour ageing conditions. From another point of view, the strain hardening exponent can also be considered: if the strain hardening exponent is greater than 0.2, the alloy will harden cyclically, whereas if it is greater than 0.1 (but below 0.2), the alloy will soften cyclically.
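The strain hardening ratios quoted above follow directly from Table 3 as the ultimate tensile stress divided by the yield stress; a quick Python check (stress values copied from Table 3, category labels abbreviated from Table 2):

```python
# Ultimate tensile stress and yield stress (MPa), copied from Table 3.
table3 = {
    "A (untreated)": (102.36, 48.164),
    "B (AA 3 h)":    (264.73, 217.19),
    "C (AA 5 h)":    (219.74, 177.67),
    "D (AA 7 h)":    (199.51, 175.31),
}

# Strain hardening ratio = UTS / yield stress; the paper reads > 1.4 as
# cyclic softening and 1.2-1.4 as cyclic hardening.
ratios = {name: round(uts / ys, 2) for name, (uts, ys) in table3.items()}
print(ratios)
# -> {'A (untreated)': 2.13, 'B (AA 3 h)': 1.22, 'C (AA 5 h)': 1.24, 'D (AA 7 h)': 1.14}
```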
For the untreated condition, the strain hardening exponent is 0.19372 > 0.1, so the material softens cyclically.

4 Conclusion

Al 6061 can be strengthened by applying T4 and T6 temper treatments with ageing time variations of 3, 5, and 7 hours. The increase in strength is accompanied by a decrease in ductility. The highest strength, 264.73 MPa, was achieved at the 3-hour ageing time, with a reduction in ductility of 10% relative to the untreated condition in terms of the decrease in strain. The results also show that each applied ageing time variation significantly increases the strength of the aluminium 6061-T4 alloy while largely maintaining its ductility and elasticity. Investigation of the strain hardening ratio and the strain hardening exponent indicates that the increase in strength of Al 6061-T4 occurred because the alloy matrix hardened, or strengthened, through the formation of an intermetallic Si phase in the alloy matrix structure.

References

[1] C. F. Tan and M. R. Said, "Effect of hardness test on precipitation hardening Al 6061-T6", Chiang Mai Journal of Science, 36(3), 276-286, 2009.
[2] M. F. I. A. Imam et al., "Influence of heat treatment on fatigue and fracture behavior of aluminium alloy", Journal of Engineering Science and Technology, 10(6), 730-742, 2015.
[3] J. E. Hatch (Ed.), "Aluminium: Properties and Physical Metallurgy", ASM, Metals Park, OH, 231-232, 1984.
[4] D. Lehmus and J. Banhart, Materials Science and Engineering A, 349, 98, 2003.
[5] J.
Ridwan et al., "Effect of heat treatment on microstructure and mechanical properties of 6061 aluminium alloy", Journal of Engineering and Technology, 5(1), 2014.
[6] F. Ozturk, "Effects of aging parameters on formability of 6061-O alloy", Journal of Materials and Design, 31, 487-4852, 2010.
[7] H. R. Shahverdi, "Effects of time and temperature on the creep forming of 7075 Al: springback and mechanical properties", Journal of Material Science and Engineering, A 528, 8795-8799, 2011.
[8] G. E. Totten, C. E. Bates, and N. A. Clinton, "Handbook of Quenchants and Quenching Technology", ASM International, 62, 140-144, 1993.
[9] G. E. Totten and M. A. H. Howes, "Steel Heat Treatment Handbook", Marcel Dekker, Inc., 1997.
[10] D. Maisonnette, M. Suery, D. Nelias, P. Chaudet, and T. Epicier, "Effects of heat treatments on the microstructure and mechanical properties of a 6061 aluminium alloy", Materials Science and Engineering, Elsevier, 528(6), 2718-2724, 2018.
[11] H. Demir and S. Gunduz, "The effect of aging on machinability of 6061 aluminium alloy", Materials & Design, 30(5), 1480-1483, 2009.
[12] D. Lehmus, J. Banhart, and M. A. Rodriguez-Perez, Materials Science and Technology, 18, 474, 2002.
[13] S. Kalpakjian and S. R. Schmid, "Manufacturing Engineering and Technology", sixth edition, Prentice Hall, New York, 2009.
[14] G. Mrówka-Nowotnik, "Influence of chemical composition variation and heat treatment on microstructure and mechanical properties of 6xxx alloys", Archives of Materials Science and Engineering, 46(2), 98-107, 2010.
[15] S. M. Raja, H. A. Abdulhadi, K. S. Jabur, and G. R. Mohammed, "Aging time effects on the mechanical properties of Al 6061-T6 alloy", Engineering, Technology & Applied Science Research, 8(4), 3113-3115, 2018.
[16] N. R. Prabhu Swamy, C. S. Ramesh, and T.
Chandrashekar, "Effect of heat treatment on strength and abrasive wear behavior of Al6061-SiC composites", Bulletin of Materials Science, 33(1), 49-54, 2010.
[17] K. Matsuda, S. Ikeno, and T. Sato, "HRTEM study of nano-precipitation phases in 6000 series Al alloys", Science, Technology and Education of Microscopy: An Overview, 152-162, 2003.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 2, pages 159–172, p-ISSN 2655-8564, e-ISSN 2685-9432

This work is licensed under a Creative Commons Attribution 4.0 International License.

Appropriate Technology-Based Project-Based Learning: 3D Printing Utilization for a Learning Media Class

Monica Cahyaning Ratri1,*

1 Chemistry Education Study Program, Faculty of Teacher Training and Education, Sanata Dharma University, Yogyakarta, Indonesia
* Corresponding author: monicacahyaningr@usd.ac.id

(Received 01-10-2022; revised 04-11-2022; accepted 07-11-2022)

Abstract

Indonesia has more than ten thousand islands, making it an archipelago; therefore, the distribution of learning media and facilities across the country is difficult. This condition may affect the delivery of learning material and students' understanding. Do-it-yourself (DIY) learning media can help overcome this condition. We investigated the usefulness of applying appropriate technology, through the 3D printing technique, to fabricate learning media inspired by local wisdom to meet the need for learning media. The students were guided by a project-based learning (PjBL) method to identify a problem, design their learning media, fabricate it using a 3D printer, and apply it to deliver teaching material in class.
This study aimed to observe the implementation of the design process, from theory to practice. A qualitative method and a questionnaire were used to measure students' responses. We found that PjBL based on a 3D printing project increases students' motivation through creativity and is effective in delivering teaching materials. Furthermore, 3D printed learning media can easily be fabricated while producing less waste.

Keywords: actuators and sensors, TCP socket, object detection

1 Introduction

Indonesia is a country with more than ten thousand islands, well known as one of the biggest archipelagos. Not only is the number of islands huge, but the population of more than 275 million people also varies across many backgrounds and ethnic groups, making the country highly diverse. Regional autonomy was chosen as the government system to maximize regional potential. Several problems in the education system and facilities arise from these conditions [1, 2]. As the number of islands and the population grow, it is not easy to even out education facilities across the country. Education is one of the crucial aspects of achieving a better society [3]. However, this remains challenging given the various ethnic backgrounds in Indonesia. Limited access to education facilities, a shortage of capable teachers, problems in the curriculum, and tuition fees too high for most Indonesians are challenging problems that need to be solved [4-6]. Solutions to these problems should be found now, so that education goals in Indonesia can soon be achieved. Chemistry is a mandatory subject that has to be taken during high school in Indonesia.
Thus, understanding the chemistry subject is an essential capability, even though chemistry has many abstract concepts [7, 8]. The structure of a compound is an example of an abstract phenomenon in chemistry that has to be related to real life [9]. Therefore, a strategy to give a deep understanding of this matter is crucial. Moreover, the chemistry subject requires students to recognize and grasp abstract molecular models of compounds; hence, unlimited means of visualizing abstract models are needed [10]. In chemistry, the limits of students' ability to visualize abstract phenomena can be addressed with 3D imaging software on a computer or with adjustable do-it-yourself (DIY) media. Media that visualize abstract concepts in chemistry are preferable for delivering teaching material and achieving deep understanding. Learning media play a significant role in making abstract matter more concrete in teaching and learning. The effectiveness and efficiency of achieving learning objectives increase with the involvement of learning media [11]. Chemistry is not a subject that is easy to explain only verbally; visuals also help in explaining new and novel ideas in chemistry [12]. Visual media help students make ideas more accessible than textbook material does [13]. Based on this understanding, learning media that help students make concepts more concrete, involve local wisdom to address the problem of material availability, and are easily made are necessary at this time. Thus, the learning media not only help students understand abstract concepts better but also help teachers visualize theoretical ideas.
The problem of transportation connecting one island to another in Indonesia affects the mobility of school facilities such as learning media. Therefore, a new strategy of on-site fabrication of learning media, utilizing local wisdom and appropriate technology, is required. Appropriate technology yields a new end-use product based on society's needs within an integrated system [14]. The needs for a product in an area are gathered into one dataset and, based on that, a new design is made to fulfil the need. The fabrication of the designed product follows, and the product is then delivered and used by people, in this case students [15]. 3D printing is an application of appropriate technology able to meet the need for learning media based on local wisdom. 3D printing is a fabrication process in which a 3D digital model is processed by a 3D printer, building a 3D object layer by layer by joining and solidifying material under computer control [16]. The 3D printing technique allows complex designs to be fabricated easily with little waste. To find a solution to the problem mentioned above, we designed a project-based learning (PjBL) activity in the learning media class. PjBL allows students to look for solutions to problems they face and to design a plan to solve them [17]. Thus, the students could creatively create new learning media based on local wisdom and the subject's needs using 3D printing technology. We investigated the effectiveness of applying the 3D printing technique to the fabrication of learning media in the learning media class. The students' motivation and responses were measured with a questionnaire through Google Forms.
The students' responses indicated that learning media fabrication through PjBL with 3D printing was easy to design, fabricate, and maintain, and that this learning media helps deliver teaching material in the chemistry subject.

2 Research Methodology

The sample was one class of students in the learning media class, Chemistry Education Study Program, Sanata Dharma University, Yogyakarta. We used PjBL-based research with a six-step approach [18] and took 5 weeks to conduct the project. During the first meeting, students were introduced to the 3D printing technique, basic information about the Cartesian 3D printer used for this project, and basic information about the 3D design software. We then applied the PjBL method in this class and gave the students a project to be solved by utilizing the 3D printer to make DIY learning media. Students were asked to identify a problem in learning material delivery that could be solved with learning media. As the first step, the students were encouraged to find a problem in delivering teaching material in the high school chemistry subject. The second step was making a plan to solve the problem they found. The third step was scheduling the project, followed by students' weekly reports. Then the students presented their results in a student exhibition, and the final step was reflection. The project started with the fabrication of the learning media based on the students' findings. The first fabrication step was translating the learning media design, inspired by local wisdom, into a 3D digital design using Tinkercad. This was followed by a printing process using an Ender 3 Pro 3D printer with polylactic acid (PLA) filament as the material. Weekly reports were handed to the teacher so the students' progress could be monitored, and discussion was open during this time.
After all this was done, the students had to present their results in an exhibition and explain the function and purpose of their learning media. In the end, we held a class reflection on our findings after the project was finished. The effectiveness of this project was measured by a questionnaire covering student motivation, the ease of fabrication, and the students' responses to the potential application. The questionnaire was given to the students through Google Forms after all the processes were completed, so the students' responses were based on their experience.

3 Results and Discussion

The appropriate technology chosen for DIY learning media fabrication was the 3D printing technique, owing to the simplicity of the instrument and the effectiveness of its fabrication. Moreover, a 3D printer is a relatively small instrument, needs only PLA filament as material, and is easy to transport to many places, even rural areas. Indonesia is an archipelago with transportation problems in reaching rural areas; however, it has prodigious local wisdom that can inspire DIY learning media fabrication. For those reasons, we conducted the study by applying the PjBL method to measure the effect of applying appropriate technology on student motivation and on the effectiveness of teaching delivery in chemistry subjects. The research steps are presented in Figure 1. The steps started with a discussion on finding the problem. After the students found a problem in delivering chemistry teaching material, they made a plan to solve it. The solution involved 3D printing in creating the new learning media. Scheduling the project was the next step in carrying out the plans. After starting the fabrication step using 3D printing, students had to report their progress weekly so the project's progress could be monitored.
In the next step, after completing their projects, the students presented their results in the student exhibition (Supplementary Figure 1), followed by the students' reflections. The reflection was conducted to complete the cycle, to recall their experience during the project, and to share their views on motivation and the effectiveness of the 3D printing-based learning media. The questionnaire was included in the last step, the reflection. The questionnaire had two parts, on the fabrication and on the potential application of 3D printed learning media: the first part concerned the students' responses on fabrication, and the second part their responses on application. The questions in each part are listed below:

1. Fabrication of learning media based on 3D printing is easy to design.
2. Fabrication of learning media based on 3D printing triggers creativity.
3. Fabrication of learning media based on 3D printing is easy to fabricate.
4. Fabrication of learning media based on 3D printing makes the waste easy to handle.
5. Fabrication of learning media based on 3D printing is easy to maintain.
6. The learning media in application is maintainable (easy to maintain).
7. The learning media in application is usable (easy to operate and use).
8. The learning media in application is reusable.
9. The learning media in application is effective and efficient.
10. The learning media in application makes it easy to deliver teaching material.

Students' feedback was then collected and analyzed.

Figure 1. The experimental steps in applying PjBL-based research with the six-step approach to measure the effectiveness of applying appropriate technology on students' motivation and teaching material delivery.
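Responses to questionnaire items like those above are typically summarized as percentage agreement per item; a minimal sketch with hypothetical responses (illustrative data only, not the study's raw responses):

```python
# Hypothetical yes/no answers to one questionnaire item
# (e.g. "... is easy to fabricate"); not the study's raw data.
responses = ["yes", "yes", "no", "yes", "yes"]

def percent_agree(answers):
    """Share of respondents answering 'yes', as a percentage."""
    return 100.0 * sum(a == "yes" for a in answers) / len(answers)

print(percent_agree(responses))  # 80.0
```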
figure 2. the printing process and 3d printed products for learning media. a and b: the printing process using the cartesian 3d printer for learning media products. c: 3d printed resembling electrophoresis instrument and molymod. d: 3d printed ionic bond puzzle. e: 3d printed embossed periodic system of elements with specific colors based on their properties.

the pjbl cycle was conducted for 5 weeks. students were given a chance to finish their projects and report their progress weekly on the learning management system (lms) site. the report was needed to control the students' progress, and the teacher was able to discuss and give suggestions if needed. figure 2a and b show the fabrication process of learning media using the 3d printing technique, which built a 3d learning media resembling an electrophoresis instrument layer by layer using a cartesian 3d printer. the finished 3d printed learning media are shown in figure 2c, d, and e. in figure 2c, a resembling 3d electrophoresis instrument and molymod can be seen. the resembling 3d electrophoresis instrument is useful for explaining the electrophoresis process to students. molymod also helps the teacher explain the abstract concept of the molecular structure of a molecule to students. a 3d shape and visual presentation allow the students to turn an abstract concept into something more realistic; therefore, students' understanding can be enhanced [19]. the 3d-printed ionic bond puzzle is shown in figure 2d. the puzzle is a traditional game in indonesia; inspired by this local wisdom, the puzzle aims to give the students another alternative in the learning process.
the puzzle gaming method in delivering the ionic bond topic has the potential to help students heighten their motivation and learning results [20]. the 3d-printed embossed periodic table of elements is shown in figure 2e. the 3d embossed periodic table of elements with specific colors based on their properties aims to strengthen students' understanding of the differences in elements' properties visually. the fabrication of learning media was governed by appropriate technology using the 3d printing technique. in the application of appropriate technology, several things have to be considered, such as the effectiveness of the technique in fulfilling the need, the ease of fabrication, and also waste handling. thus, the students' opinion of this technology should be known through a questionnaire survey. the students' responses to this fabrication technique are presented in figure 3. based on the survey, students said the 3d printer was easy to maintain. therefore, the 3d printer is usable in many places, even with minimal facilities, as long as electricity is provided. pla filament was used as the material because of its affordability; pla is a degradable biopolymer that is harmless to the environment [21]. based on the survey, students said that the waste of this fabrication was easy to handle because of its biodegradability. 80% of students said the fabrication step was easy, and the rest said that it was difficult. this finding shows that students have different levels of motivation; thus, the responses were different. all students agreed that this project motivated them by triggering their creativity. students were obligated to solve the problems by creating learning media based on 3d printing inspired by local wisdom. furthermore, a 3d electronic design was required for working with a 3d printer; therefore, students had to learn how to make a 3d design based on their needs.
even though 10% of students agreed that designing 3d objects was not easy, most students agreed that the 3d design was easy to make.

figure 3. summary of students' responses on the fabrication of learning media based on the 3d printing technique. students' responses were based on their motivation, which was expressed by their creativity and willingness to learn new techniques and designs.

a survey measured the effectiveness of the application of the 3d printed learning media. the sample was the students of a learning media class. five indicators were used as representatives of the success of the learning media application. the indicators were:
1. learning media is maintainable (easy to maintain);
2. learning media is usable (easy to operate and use);
3. learning media is reusable;
4. learning media is effective and efficient in delivering the teaching material;
5. learning media makes it easy to deliver teaching material.
the students' response to the application of the learning media was great. the students agreed that the 3d printing-based learning media is easy to maintain during the delivery of the teaching material. the application of the learning media was useful for the students. that is important because the utilization of learning media is able to maintain students' interest, increase their analytical skills, and enhance student attention [19]. recently, because of environmental problems and increased awareness of environmental sustainability, fabricated learning media should be reusable. the survey shows that this 3d-printed learning media can potentially be used several times. because chemistry is a subject in which most of the topics are abstract, real learning media for chemistry subjects are preferred.
the learning media ought to be visually accessible and present abstract concepts in reality [12, 13]. the students agreed that the application of the learning media fulfilled that necessity. based on the survey, students said that the learning media is effective and efficient in delivering an abstract concept in reality and makes the abstract concept easy to explain and understandable. therefore, the result of this pjbl project shows the potential of diy learning media fabrication to fulfill the need for local wisdom-inspired learning media.

figure 4. summary of students' responses on the application of learning media based on the 3d printing technique. students' responses were based on the questionnaire, including the ease of use, reusability, and the effectiveness of delivering teaching material.

4 conclusion

from the results of this study, it can be concluded that the pjbl-based 3d printed learning media fabrication enhanced students' motivation and produced media that are easy to handle, operate, and use. the 3d printed learning media is also helpful and effective in delivering learning material. the material used for the filament is affordable and biodegradable and thus harmless to the environment. the ease of transporting the 3d printer makes this 3d printing technique potentially usable in many places, even in rural areas.

acknowledgment

the author wants to acknowledge the very generous contributions of mr. johnsen harta and all the students in the learning media class, chemistry education study program, sanata dharma university, and the national research foundation of korea for providing the 3d printer instrument for this research.

references

[1] l. hakim, “pemerataan akses pendidikan bagi rakyat sesuai dengan amanat undang-undang nomor 20 tahun 2003 tentang sistem pendidikan nasional”, edutech: jurnal ilmu pendidikan dan ilmu sosial, 2(1), 2016.
[2] r. niswaty, m. nasrullah, and h.
nasaruddin, “pelayanan publik dasar bidang pendidikan tentang sarana dan prasana di kecamatan pulau sembilan kabupaten sinjai”, in seminar nasional lp2m unm, 1(1), 2019.
[3] c. hopkins and r. mckeown, “education for sustainable development: an international perspective”, education and sustainability: responding to the global challenge, 13, 13-24, 2002.
[4] a. b. santosa, “potret pendidikan di tahun pandemi: dampak covid-19 terhadap disparitas pendidikan di indonesia”, csis commentaries, 1-5, 2020.
[5] d. hairi, “respon pemuda perbatasan dalam menghadapi keterbatasan fasilitas pendidikan pada pulau combol desa selat mie kecamatan moro kabupaten karimun”, universitas maritim raja ali haji.
[6] i. d. p. subamia, “analisis kebutuhan tata kelola tata laksana laboratorium ipa smp di kabupaten buleleng”, jpi (jurnal pendidikan indonesia), 3(2), 2015.
[7] a. o'dwyer and p. e. childs, “who says organic chemistry is difficult? exploring perspectives and perceptions”, eurasia journal of mathematics, science and technology education, 13(7), 3599-3620, 2017.
[8] g. sirhan, “learning difficulties in chemistry: an overview”, 2007.
[9] h. k. wu, j. s. krajcik, and e. soloway, “promoting understanding of chemical representations: students' use of a visualization tool in the classroom”, journal of research in science teaching, 38(7), 821-842, 2001.
[10] t. a. holme, “can we envision a role for imagination in chemistry learning?”, journal of chemical education, 98(12), 3615-3616, 2021.
[11] y. d. puspitarini and m. hanif, “using learning media to increase learning motivation in elementary school”, anatolian journal of education, 4(2), 53-60, 2019.
[12] g.
salomon, “media and symbol systems as related to cognition and learning”, journal of educational psychology, 71(2), 131, 1979.
[13] p. s. cowen, “film and text: order effects in recall and social inferences”, ectj, 32(3), 131-144, 1984.
[14] c. brivio, “off main grid pv systems: appropriate sizing methodologies in developing countries”, 2014.
[15] m. jiménez, l. romero, i. a. domínguez, m. d. m. espinosa, and m. domínguez, “additive manufacturing technologies: an overview about 3d printing methods and future prospects”, complexity, 2019.
[16] m. c. ratri, a. i. brilian, a. setiawati, h. t. nguyen, v. soum, and k. shin, “recent advances in regenerative tissue fabrication: tools, materials, and microenvironment in hierarchical aspects”, advanced nanobiomed research, 1(5), 2000088, 2021.
[17] j. choi, j.-h. lee, and b. kim, “how does learner-centered education affect teacher self-efficacy? the case of project-based learning in korea”, teaching and teacher education, 85, 45-57, 2019.
[18] i. n. h. fiktoyana, p. s. arsa, and a. adiarta, “penerapan model project based learning untuk meningkatkan hasil belajar dasar dan pengukuran listrik siswa kelas x-tiptl 3, smkn 3 singaraja”, jurnal pendidikan teknik elektro undiksha, 7(3), 90-101, 2018.
[19] i. n. h. fiktoyana, p. s. arsa, and a. adiarta, “enhancing student interest in english language via multimedia presentation”, international journal of applied research, 2, 275-281, 2016.
[20] s. y. cheung and k. y. ng, “application of the educational game to enhance student learning”, frontiers in education, 6, 31 march 2021.
[21] h. tsuji and s. miyauchi, “enzymatic hydrolysis of poly(lactide)s: effects of molecular weight, l-lactide content, and enantiomeric and diastereoisomeric polymer blending”, biomacromolecules, 2(2), 597-604, 2001.
international journal of applied sciences and smart technologies
volume 5, issue 1, pages 1–16
p-issn 2655-8564, e-issn 2685-9432
this work is licensed under a creative commons attribution 4.0 international license

forensic investigation in sql server database using temporal tables & extended events artifacts

shadi k. a. zakarneh1,*
1m.sc. student, palestine technical university kadoorei, tulkarem, palestine
*corresponding author: zakarnehshadi@gmail.com
(received 21-09-2022; revised 26-09-2022; accepted 22-11-2022)

abstract

different database management systems (dbms) were developed and introduced to store and manipulate data. microsoft sql (mssql) server is one of the most popular relational dbmss used for large databases. with the increasing use of databases, intentional and unintentional incidents affecting databases are increasing dramatically. therefore, there is a great need to develop database forensic investigation (dbfi) tools and models. the temporal table is a new feature, introduced with mssql server 2016, for tracking changes, database auditing, data loss protection, and data recovery. in addition, extended events is another new feature, introduced with mssql server 2008, for database performance troubleshooting. this study focused on dbfi in the mssql server using temporal tables and extended events artifacts. an experiment was conducted, and the results present the use of the temporal tables and extended events artifacts in analyzing and determining internal unauthorized modifications on the database.

keywords: sql server, database forensic, forensic investigation, database management system.

1 introduction

mssql server is a relational dbms.
microsoft developed the first version of the mssql server, version 7.0, in 1998, and has continued developing newer versions up to the latest version today, mssql server 2019 [1]. in the mssql server, there are two main types of databases: system databases, which are used for sql server operations, and user databases, which are created by the users for their needs and business [1]. each database in mssql server mainly consists of two types of files. the transaction log file stores information about the transactions executed on the database. each time the data is modified, the transaction log information is used to undo or redo the changes, so the transaction log is also used in database recovery after a failure occurs [2]. the other file is the data file, which is used to store the data [2]. mssql server consists of several services: the database engine, the agent, the mssql server browser, and mssql server full-text search. the mssql server also provides other services, called business intelligence services, which are used for database analysis; these services are the mssql server integration service, the mssql server reporting service, and the mssql server analysis service [3]. the stable and reliable mssql server 2019 is an improved version of the previous versions; the new version includes all the previous versions' features in addition to a set of new features related to performance, security, availability, and big data [4]. the mssql server 2019 new features include intelligent query processing, which improves the query optimizer behavior and, as a result, improves performance [4].
accelerated database recovery (adr) is another new feature; with this feature, the time required for the database recovery process has been greatly reduced [1]. with the always encrypted with secure enclaves feature, administrators are not allowed to access the decryption keys, while transparent column encryption is allowed [1]. this feature enables mssql server to encrypt part of the memory to perform computations on encrypted fields without exposing the unencrypted values to the rest of the operations [1]. to achieve this goal, secure enclave technology is used [5]. another feature is memory-optimized tempdb metadata; in this feature, improvements have been made to the tempdb code by improving access to ram so that metadata can rely completely on memory. thus, the so-called bottleneck problem that large workloads are exposed to when using a large amount of tempdb is bypassed [5]. in recent decades, the use of databases has increased dramatically. the computerization of businesses and services and the use of applications in the daily lives of individuals have greatly increased the need for databases [6]. with this expansion in the use of databases, the need to improve security and privacy protection increases dramatically, especially with the increase in security problems and security incidents that affect data confidentiality and users' privacy [6]. because of the increase in security incidents on databases, there is now a great need for dbfi to identify digital evidence and perpetrators and improve information security [6]. mssql server is one of the top ten relational dbmss according to the database engines popularity ranking in 2020, as shown in fig. 1 [7].

figure 1. database ranking score in dec 2020 [7]

a new feature introduced in the mssql server is named the temporal table.
this feature was defined in ansi sql 2011 [8] and started as a new feature with the mssql server 2016. temporal tables were introduced for tracking changes, audit purposes, data loss protection, and data recovery in case of intentional or unintentional changes [9]. another new feature, developed in the mssql server to help database developers troubleshoot performance issues during and after database development, is the extended events feature. this feature was first launched with mssql server 2008 and was then improved in mssql server 2012 [10]. given the high popularity and wide use of the mssql server, dbfi in the mssql server is highly important to determine the artifacts that can be extracted from the mssql server's new versions with their new features, especially the security features. this study will focus on dbfi in mssql server 2019 to determine the new artifacts derived from the temporal table and extended events features.

2 literature review

database forensics is one of the branches of digital forensics [11]. dbfi is based on examining and retrieving the contents of databases and analyzing metadata to identify digital evidence related to incidents that the databases are exposed to [12]. because of the wide use and spread of digital data and the heavy reliance on databases, dbfi has become necessary to investigate incidents affecting databases [11], as well as other cybercrimes, where the digital forensic investigation in many cases includes extracting digital evidence from the databases of systems and applications related to the committed cybercrime [11]. to perform dbfi, many tools are used to extract databases and their metadata for analysis and investigation [13]. some of the features of these tools are the ability to clone a hard drive, compare files, and encrypt them.
these tools also work to recover deleted or damaged data [14], in addition to being able to recover lost or deleted database components such as tables, views, keys, and stored procedures [13]. the dbfi process includes the identification, collection, preservation, reconstruction, analysis, and reporting of the investigation results and findings. however, the multiplicity and diversity of dbmss such as oracle, mssql server, postgresql, etc., have made it difficult to have a single specific model for dbfi. several dbfi models were designed based on specific incidents that some types of databases were exposed to [11]. because dbfi aims to find the digital evidence in the database, different forensic investigation models have been developed [15]. some of these models analyzed transactions and journal logs, while other models worked to recover deleted data, among them the models that rely on transaction logs to recover data. still other models depend on the analysis of the database engine to bypass the problem of records being deleted for anti-forensic purposes, overwritten, or changed periodically [15]. the engine-based method for data recovery is based on raw-level data analysis. this method is often used in small dbmss such as sqlite, where the internal structure of the databases must be understood for the investigator to use this method [15]. thus, to use the forensic investigation method based on database engine analysis on large dbmss such as mssql server, it is necessary to understand the internal structure of the database engine and its storage [15]. in mssql server dbfi, different artifacts can be collected and analyzed, such as transaction log files, data files, sql server logs, and database schemas [15]. however, the log file can be deleted or modified, so the investigator may need to analyze the data file.
to analyze the data file, the investigator needs to understand how the mssql server storage engine stores the raw data and the internal structure of the data file [11]. the data file in mssql server consists of a set of pages. a page consists of a header, data rows, and a row offset array, as shown in fig. 2. the page size is 8192 bytes, of which 96 bytes are reserved for the header. the page metadata is stored in the first 64 bytes of the header, and the rest of the header space is filled with 0x00. record data of the tables is stored in the data rows; if the record size is more than 8060 bytes, the sql server stores the record in multiple pages. the record's location in the page is stored in the row offset array [15]. dbmss record data changes in transaction logs and record information about transactions in audit logs, such as what changed, when the changes were made, and who made the change. however, this data may not be sufficient for some systems, such as financial systems, that sometimes need to access a snapshot of the data at a certain time [2]. to solve this problem, temporal tables were defined by ansi sql 2011 to meet these requirements. temporal tables are designed to keep a complete record of data changes and are easy to analyze over time. there are two types of temporal tables. the first type is system-versioned temporal tables; these tables keep a history of data changes based on the time the changes occurred in the system [3]. thus, system-versioned temporal tables provide a snapshot of the data that was in the system at a specific time. the second type is application-versioned temporal tables, which provide data that is valid from a business point of view [3].

figure 2.
page structure [15]

the extended events feature was launched in mssql server to collect as much data as needed to help database administrators or developers troubleshoot and identify database performance problems [16]. this feature was released with mssql server 2008 and later versions. in mssql server 2008 this feature was introduced without a gui, so developers had to write large and complex queries to get the required data [5]. the feature was developed further in mssql server 2012 and later to enable the user to set it up through the gui and to give greater choice in the data that can be collected to identify performance problems and to be used for troubleshooting by developers [5]. the extended events performance monitoring system is lightweight, so it uses minimal performance resources. extended events sessions can be created or modified, and the collected data is displayed and analyzed through the gui provided by the mssql server management studio. the developer can choose the data he wants to collect and thus display it on the live monitoring screen, as well as store it in files or a table in a database for follow-up and analysis at any time [16]. many previous studies have researched the dbfi field. a study entitled "development and validation of a database forensic metamodel" aimed to present a model for dbfi called the database forensic metamodel [17]. the study analyzed a set of models used in dbfi to reach this model, which consists of four phases: identification, artifacts collection, artifacts analysis, and documentation and presentation [17].
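as an aside, the data file and transaction log artifacts discussed in the literature above can be examined directly from t-sql with two undocumented commands. the sketch below is illustrative only: the database name 'test', file id 1, and page id 200 are placeholder values, and since both commands are undocumented and unsupported, they should only be run against a forensic copy of the database, never the production server.

```sql
-- route dbcc output to the client session instead of the error log
DBCC TRACEON (3604);

-- dump one 8192-byte page: arguments are database, file id, page id,
-- and a print option (3 = page header plus a per-row interpretation)
DBCC PAGE ('test', 1, 200, 3);

-- read the active portion of the current database's transaction log;
-- the two NULL arguments mean no LSN range filter is applied
SELECT [Current LSN], Operation, Context, [Transaction ID]
FROM fn_dblog(NULL, NULL);
```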
another study, entitled "detecting database file tampering through page carving", focused on presenting a component that detects modifications in the database file; this component relies on forensic investigation to identify discrepancies between indexes and tables in the database [14]. a study entitled "duel security-detection of database modification attack and restore facility from unauthorized access" proposed a model for dual security to identify and prevent attacks on the database by monitoring web and database requests [18]. in the proposed model, the modified data can be restored using the md5 algorithm [18]. while these previous studies focused on reviewing dbfi models and presenting proposed models, other studies focused on data recovery using dbfi techniques and tools by collecting and analyzing the log files, and still others focused on analyzing the data files to recover deleted or tampered data. this study aims to analyze and detect internal unauthorized modifications on the mssql server database using temporal tables and extended events artifacts. in addition, the modified data will be recovered using the temporal table. the study used a dbfi model consisting of four phases (identification, artifacts collection, artifacts analysis, and documentation and presentation).

3 research methodology

two basic steps were taken to complete this study. the first step was to collect data and information from literature studies, as a literature study is a method of collecting data through reading books, theses, journals, and other related resources. the second step was to design a scenario to implement the dbfi in the mssql server database. through the designed scenario, the dbfi model followed in this study is clarified.
in this study, mssql server 2019 is installed, and a proposed database is designed, implemented, and prepared so that the dbfi model phases can be implemented to analyze and determine any internal unauthorized modification that occurred in the database. in the simulation process, the dbfi model used consists of four stages: identification, artifacts collection, artifacts analysis, and documentation and presentation, as shown in fig. 3 [11]. all stages were taken to obtain valid and admissible evidence.

figure 3. database forensic investigation model

at the identification stage, the incident type and the nature of the target databases are understood, the techniques and means needed in dbfi are identified, the forensic environment is prepared, and the database server is isolated from the production environment and networks [11]. in the process of artifacts collection, all relevant data in the compromised database are collected from the database server. the collected data are then analyzed, and the evidence is extracted and determined. all the processes conducted during the three stages are documented, including who conducted them, where, and when. all the findings and data related to the evidence are documented in a timely manner [11].

4 experiment

the experiment was conducted by installing the mssql server 2019 standard version on a workstation with a windows 10 pro operating system. the experiment database was built on the mssql server. a temporal table for customers' information was built in the database with its historical table, as shown in fig. 4.

figure 4.
create a customer temporal table with its historical table

the extended events session was created, and the data that need to be collected were identified. the created session was configured to be saved to a file automatically, as shown in fig. 5. after the extended events session started, a number of rows were inserted into the customer table. sql queries were executed to show the changes in the temporal and historical tables, as shown in fig. 6; the results of the sql queries are shown in fig. 7 and fig. 8. then, using another client computer, many rows were inserted, modified, and deleted. after these transactions on the table, the temporal and historical tables' data were viewed to show the impact of the insert, update, and delete transactions on the historical table. the extended events session was viewed, and the events were reviewed to check the data collected during the execution of the transactions on the table.

figure 5. extended events session properties

figure 6. sql queries to retrieve data from the temporal and historical tables

figure 7. retrieved data from the temporal table

figure 8. retrieved data from the historical table

according to a suspected unauthorized modification on the database, a comparison between the current data in the customer temporal table and its historical table was conducted. through this comparison, the historical data and current data were viewed, and the rows' dates and times were collected. the extended events collected data were reviewed according to the date and time collected from the historical table.
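the experimental setup in figs. 4 and 5 can be sketched in t-sql. this is a hedged reconstruction, since the paper shows the definitions only as screenshots: the custid, custname, validfrom, and validto columns are the ones named in the results, while the history table name, session name, and file path are placeholder assumptions.

```sql
-- system-versioned temporal table for customers (fig. 4);
-- validfrom/validto are maintained automatically by the engine
CREATE TABLE dbo.Customer
(
    CustId    INT           NOT NULL PRIMARY KEY CLUSTERED,
    CustName  NVARCHAR(100) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

-- extended events session (fig. 5): capture completed statements
-- together with the client and user context, persisted to an .xel file
CREATE EVENT SESSION TrackChanges ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.client_app_name,
            sqlserver.client_hostname,
            sqlserver.database_name,
            sqlserver.username,
            sqlserver.sql_text)
)
ADD TARGET package0.event_file (SET filename = N'C:\XE\TrackChanges.xel')
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION TrackChanges ON SERVER STATE = START;
```

with system versioning on, every update or delete against dbo.Customer automatically writes the prior row version to the history table, which is what makes the later comparison possible.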
from the extended events collected data, the executed transaction is determined along with its date and time, the number of affected rows, the user who performed the modification, the client machine, the executed sql statement text, etc.

5 results and discussion

the study succeeded in collecting the temporal and historical tables' data and the extended events log artifacts to determine the evidence of the unauthorized modification. according to a suspected unauthorized modification on the database, the dbfi model was used to collect the artifacts and determine the evidence. at the identification stage, the mssql server version and the tampered database were determined, the incident information was identified, and the database server was isolated. the tampered database and the extended events files were collected and transferred to the database forensic workstation in the artifacts collection stage. at the artifacts analysis stage, the database was attached to the mssql server on the database forensic workstation. the current data from the customer temporal table were viewed using a sql select statement, and the historical table data were viewed using a sql select statement with a system time period condition to return the historical data, as shown in figs. 9–11.

figure 9. sql queries to retrieve data from the temporal and historical tables

figure 10. retrieved data from the temporal table

figure 11. retrieved data from the historical table

by comparing the temporal table with the historical table, the investigator found the differences between the data in the two tables.
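the select statements with a system time period condition described above, and the comparison that follows, can be sketched as below. this is a hedged sketch assuming the customer table and a dbo.CustomerHistory history table; the paper shows its own queries only in figs. 6 and 9.

```sql
-- current rows only
SELECT CustId, CustName, ValidFrom, ValidTo
FROM dbo.Customer;

-- every version of every row, current and historical
SELECT CustId, CustName, ValidFrom, ValidTo
FROM dbo.Customer FOR SYSTEM_TIME ALL
ORDER BY CustId, ValidFrom;

-- rows that appear in the history but no longer exist in the
-- current table, i.e. candidates for deleted rows
SELECT DISTINCT h.CustId
FROM dbo.CustomerHistory AS h
WHERE NOT EXISTS (SELECT 1 FROM dbo.Customer AS c
                  WHERE c.CustId = h.CustId);

-- rows that are still current but have prior versions in the
-- history, i.e. candidates for modified rows
SELECT DISTINCT h.CustId
FROM dbo.CustomerHistory AS h
JOIN dbo.Customer AS c ON c.CustId = h.CustId;
```

the validfrom and validto values returned by these queries supply the timestamps that are then matched against the extended events timeline.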
The first difference was that the CustID value 3 was repeated in the historical table with different CustName values and different ValidFrom and ValidTo values, which means a modification was executed on this row. The second difference was that the CustID value 4 was found in the historical table but was not present in the temporal table, which means the row was deleted. Given the modifications and the deletion revealed by viewing the temporal and historical data, the extended events session file was analyzed against the system dates from the historical table. In the analysis stage, the sql_statement_completed events were determined and viewed. The collected event details include the client application name, the client hostname, the affected database name, the number of affected rows, the database server instance name, the name of the user who modified the data, the database user name used, and the executed SQL statement text, as shown in Fig. 12. As shown in Fig. 12, the modification came from the client host named "SHADIPC", the user name that performed the modification was "PACC\shzakarneh", and the affected database name was "Test". While the temporal table stores the current data, the historical table stores the historical data, including deleted rows and the old versions of modified rows, in addition to the two system-time columns that indicate the validity period of each row version. By comparing the two tables, any suspected modification can be determined. In addition, by linking the date and time columns of the historical data with the timestamps in the extended events session, the evidence can be extracted from the details of the event.

Figure 12.
Extended events session details

6 Conclusion

Databases are used to store data for businesses and individuals, and MSSQL Server is one of the most popular DBMSs in the world, used for small and large data sets alike. With the increased use of databases, DBFI is becoming more important for determining evidence and recovering data. Different techniques and models have been used in DBFI, based on transaction log files and data file recovery. This study focused on DBFI using the temporal tables and extended events features of recent MSSQL Server versions as new artifacts. The experiment was conducted using a DBFI model in MSSQL Server 2019 with four stages: identification, artifacts collection, artifacts analysis, and documentation and presentation. In the results, the evidence was extracted and determined using the temporal tables and extended events artifacts.
International Journal of Applied Sciences and Smart Technologies, Volume 1, Issue 2, pages 179–188, p-ISSN 2655-8564, e-ISSN 2685-9432

Designing Independent Automatic Drinking Water Platforms at Sanata Dharma University

Muhammad Prayadi Sulistyanto*, Ervan Erry Pramesta

Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia
*Corresponding author: prayadi.sulistyanto@gmail.com

(Received 23-01-2019; revised 31-10-2019; accepted 04-11-2019)

Abstract

Environmental pollution is increasing every year. From 2011 to 2014, environmental pollution in the Special Region of Yogyakarta increased by more than 250%. One effect of environmental pollution is the decreasing availability of clean water. Sanata Dharma University, as an educational institution, seeks to provide clean water that is suitable for drinking, namely with RO (reverse osmosis) technology. Drinking water distribution has run well at Sanata Dharma University, but it lacks hygiene. In this study, the researchers designed an independent automatic drinking water platform that can dispense clean, ready-to-drink water to students in a controlled dose. The result of this study is that the independent automatic drinking water platform can provide 200 cc of clean drinking water in 9 seconds each time a user (student) uses the tool.

Keywords: RO, drinking water platform, automatic water

1 Introduction

The level of environmental pollution in the Special Region of Yogyakarta increased by more than 250% over the period 2011 to 2014. The most common pollution in 2014 was air pollution, which occurred in 415 villages, while water pollution occurred in 44 villages and soil pollution in 4 villages [1].
As environmental pollution increases every year, it stands in inverse proportion to the need for clean water, which continues to grow; this prompted PT Aqua to open clean water plants, for example in the Klaten area. Coinciding with World Water Day, the Klaten Aqua factory inaugurated the Embung Tirtamulya reservoir, located in Pucang, Tegalmulyo village, Kemalang district, Klaten, to support water availability in a number of villages on the slopes of Merapi [2]. Various studies have addressed efforts to reduce environmental pollution, such as that by Novita Sekarwati, who conducted a study to reduce phosphate levels in laundry waste in Tambakbayan, Catur Tunggal, Depok, Sleman, Yogyakarta [3]. Another researcher, Oki Oktami Yuda, conducted a study to suppress environmental pollution by controlling hotel wastewater pollution in the city of Yogyakarta in 2017 [4]. Sanata Dharma University, Yogyakarta, Indonesia, as an educational institution, also strives to provide clean water suitable for drinking using RO technology. Drinking water distribution has run well at Sanata Dharma University, but the supply of ready-to-drink water lacks hygiene. A tool is therefore needed that can dispense water cleanly and hygienically and is able to dispense water in the right dose automatically. This paper is organized as follows. Section 2 describes the design of the tool. The method (steps) for making the tool is described in Section 3. The results of the research and the discussion are presented in Section 4. The paper closes with some conclusions.

2 Design

The components used are a sensor, an in/out solenoid valve, an adjustable timer switch relay module, and a relay.
The sensor is used to detect the presence of objects in front of it, within a limited maximum detection distance; the output type of this sensor is PNP [5]. The adjustable timer switch relay module is a tool to set the time lag for turning equipment on and off. If a longer pause is desired, the electrolytic capacitor (C1) can be replaced with a larger one. It can be applied to alarms, time-lagged on/off lights, and other uses [6]. In this study, the electrical circuit used is shown in Figure 1.

Figure 1. Electrical circuit of the independent automatic drinking water platform (+12 V supply, BJ300-DDT sensor, timer relay, relay, and solenoid valve)

The circuit in Figure 1 is quite simple, using only five components arranged in such a way that they produce results according to the purpose of this study. The essence of the automated platform is the control of the opening of the faucet (solenoid valve) so that RO water can be dispensed in a controlled dose. The independent automatic drinking water platform works as follows:
a. The user (student) takes a glass that has been provided, then brings it to the RO head; the presence of the hand is automatically detected by the sensor.
b. The active sensor activates the timer and simultaneously activates the solenoid valve; because the solenoid valve is active, RO water flows into the glass.
c. The active timer counts for a set time; when the specified time has been reached, the solenoid switches off and the RO water stops flowing.
d. The RO water stops flowing even if the sensor still detects the hand.
e. The RO water also stops flowing when the sensor no longer detects the hand, even if the timer has not yet reached the specified time.
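The steps a–e above reduce to one rule: water flows only while the sensor detects a hand and the timer has not yet expired. A minimal Python sketch of this control logic (the 1-second tick is illustrative; the 9-second limit matches the paper):

```python
# Sketch of the dispensing logic in steps a-e: the solenoid valve is open
# only while (1) the sensor detects a hand AND (2) the timer has not expired.
# The 9-second timer matches the paper; the 1-second tick is illustrative.

def dispense(sensor_readings, timer_limit=9):
    """Simulate one session; returns the number of seconds the valve was open."""
    elapsed = 0            # seconds the timer has been counting
    open_seconds = 0
    for hand_detected in sensor_readings:     # one reading per second
        if not hand_detected:
            break          # step e: hand removed -> water stops immediately
        if elapsed >= timer_limit:
            break          # steps c/d: timer expired -> water stops
        open_seconds += 1
        elapsed += 1
    return open_seconds

# Hand held for 12 s: the valve still closes after the 9 s timer (steps c/d).
print(dispense([True] * 12))                 # 9
# Hand removed after 4 s: the valve closes early (step e).
print(dispense([True] * 4 + [False] * 8))    # 4
```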
The independent automatic drinking water platform is designed so that users find it aesthetically comfortable. Figure 2 shows a three-dimensional design drawing of the platform, and Figure 3 shows the dimensions of the tool and the parts of the platform.

Figure 2. Design of the independent automatic drinking water platform

Figure 3. Dimensions and parts of the independent automatic drinking water platform

The platform is equipped with a container of clean, ready-to-use glasses above the RO head, making it easier for users to take advantage of this RO drinking water service. The tool is also equipped with a lower cupboard that can be used to store the stock of clean glasses.

3 Method

The course of the research on the design of the independent automatic drinking water platform of Sanata Dharma University was as follows:
a. Literature review. The literature review covers previous research and collects references on the solenoid electric water valve (in/out electric faucet), the DC pull delay timer switch relay adjustable module, and other material related to this research.
b. Making the hardware. Hardware construction starts from the design of the placement of the adapters, valves, and sensors, so as to facilitate maintenance.
c. Data collection. Data were collected by testing the timer against the amount of RO water dispensed, so that the right time to produce the expected volume of RO water could later be determined.
d. Conclusion. The conclusions are the final results that refer to the research objectives.
4 Results and discussion

The result of implementing the design of the independent drinking water platform is shown in Figure 4. The sensor is installed under the spot where the clean glass sits, with its reading direction aimed at the end of the RO head so that the sensor can detect the glass at the end of the RO head.

Figure 4. Independent automatic drinking water platform of Sanata Dharma University at Campus III, Paingan, Maguwoharjo, Depok, Sleman, Yogyakarta, Indonesia

The system was tested with a fixed source RO water discharge. The test was done three times with the same timer values while monitoring the RO water output.

Table 1. First test results of the independent automatic drinking water platform (timer value vs. amount of water dispensed)

Table 2. Second test results of the independent automatic drinking water platform (timer value vs. amount of water dispensed)

Table 3. Third test results of the independent automatic drinking water platform (timer value vs. amount of water dispensed)

In the tests carried out on the platform, when the sensor detected a glass and the timer was set to 1 second, the RO water did not have time to flow, as shown in Tables 1–3. This is because three components (two relays and one solenoid valve) operate on the principle of magnetic induction, so that when the timer is set to 1 second, the solenoid valve is only active for a moment and the RO water cannot yet flow. The subsequent time settings are multiples of 1 second, and the test results vary.
The increase in RO water volume per second differs in each test. The output discharge of the platform takes different, increasing values in the first 3 seconds, in seconds 4 to 6, and in seconds 7 to 9. The conclusion obtained from the research is that the RO water discharge of the platform increases every second; from the first 3 seconds up to 9 seconds it increases significantly. Thus, to obtain the desired volume, only 9 seconds are needed, as shown in Tables 1–3.

5 Conclusion

The conclusion that can be drawn from this study is that the independent automatic drinking water platform can dispense clean, ready-to-drink water to students in a well-controlled dose. The platform is set with a timer with an on-time of 9 seconds, so that when a user (student) needs clean drinking water, they simply take the glass provided; when the hand is detected by the sensor, the water comes out for 9 seconds. When the hand is no longer detected by the sensor, the water automatically stops flowing even if the timer has not reached 9 seconds.

Acknowledgements

This research was financially supported by the Sanata Dharma Foundation. The authors thank the director of Politeknik Mekatronika Sanata Dharma and the Innovation Center of Politeknik Mekatronika Sanata Dharma for discussions.

References

[1] L.
Kertopati, "Pencemaran lingkungan di Yogyakarta meningkat 250 persen," https://www.cnnindonesia.com/nasional/20161023224728-20-167372/pencemaran-lingkungan-di-yogyakarta-meningkat-250-persen (accessed 23-01-2019, 15:03 WIB).
[2] "Pabrik Aqua Klaten resmikan Embung Tirtamulya untuk dukung ketersediaan air bersih," https://aqua.co.id/pabrik-aqua-klaten-resmikan-embung-tirtamulya-untuk-dukung-ketersediaan-air-bersih (accessed 23-01-2019, 15:40 WIB).
[3] N. Sekarwati, "Penurunan kadar total phosphat (PO4) pada limbah laundry dengan metode aerasi-filtrasi di Dusun Tambakbayan, Catur Tunggal, Depok, Sleman, Yogyakarta," Jurnal Kesehatan Masyarakat, 11 (1), 2018.
[4] O. O. Yuda and E. P. Purnomo, "Implementasi kebijakan pengendalian pencemaran limbah cair hotel di Kota Yogyakarta tahun 2017," Jurnal Administrasi Publik: Public Administration Journal, 8 (2), 2018.
[5] http://www.autonicsonline.com/product/product&product_id=701 (accessed 24-01-2019, 18:00 WIB).
[6] "New DC 12V pull delay timer NE555 switch relay adjustable module," https://www.tokopedia.com/solarperfect/new-dc-12v-pull-delay-timer-ne555-switch-relay-adjustable-module (accessed 23-01-2019, 19:10 WIB).

International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 2, pages 91–110, p-ISSN 2655-8564, e-ISSN 2685-9432

RAID Level 6 and Level 6+ Reliability

Thomas Schwarz

Department of Computer Science, Klingler College of Arts and Sciences, Marquette University, Milwaukee, Wisconsin, U.S.A.
Corresponding author: thomas.schwarz@marquette.edu

(Received 19-08-2020; revised 09-11-2020; accepted 09-11-2020)

Abstract

Storage systems are built of fallible components but have to provide high degrees of reliability. Besides mirroring and triplicating data, redundant storage of information using erasure-correcting codes is the only possibility for data to survive device failure. We provide here exact formulae for the data-loss probability of a disk array composed of several RAID level 6 stripes. This two-failure-tolerant organization is not only used in practice but can also provide a reference point for the assessment of other data organizations.

Keywords: RAID level 6, robustness, reliability

1 Introduction

Storage systems are built of a large number of fallible components. Failures of some of these components might be independent or might be correlated. Often, the causes of failures are obscure. For example, Jim Gray's team at Microsoft Research once observed that disk reliability in an experimental database for astronomy increased remarkably after switching to slightly more expensive enclosures [1]. In this case, the vibrations of a failing disk drive caused its neighbors to fail with a much higher probability. Failures can happen at all levels of the storage hierarchy. Central components such as air conditioning or power supply can fail, and data centers can be flooded. Individual racks can also suffer failure of networking and electricity supply. Individual disks can fail in toto or suffer latent sector failures. These incidents happen at surprisingly high rates even in large state-of-the-art data centers [2, 3, 4]. To protect against failures, data has to be stored redundantly.
With the recognition of the importance of latent sector failures, intra-disk redundancy and other schemes were introduced [5], but it was also recognized that protection against a single failure is not sufficient. Protection against single disk failures, however, dates back much further. Originally termed redundant arrays of inexpensive disks, redundant arrays of independent disks (RAID) were proposed to allow parallel I/O but were quickly recognized to solve an even more important problem, namely the tendency of magnetic hard drives to fail, often without warning. The original RAID architecture added a parity disk to an ensemble of data disks. This parity disk stores in its sectors the bit-wise exclusive-or of parallel sectors on the data disks. When a data disk fails or becomes otherwise inaccessible, its contents can be recovered by reading the parallel sectors on all surviving disks and taking the bit-wise exclusive-or of the sector contents. This arrangement is called a RAID level 4. The final architecture, the RAID level 5, rotates the role of parity disk among the disks and thus does away with the differences in load. A dedicated parity disk is written with each update to any of the data disks, but not read with any look-up, so that its load equals that of a dedicated data disk only if the read and write loads of the ensemble fulfill a certain linear equation. Nevertheless, some important players in the storage industry use dedicated parity disks. Not only the presence of latent sector errors (discovered only upon trying to access a disk sector) but also the coupling of complete disk failures motivated the introduction of higher degrees of failure tolerance. The RAID level 6 adds one more parity disk to the ensemble. The contents of the new parity disk have to be calculated using a more involved code, usually defined using Galois field operations, though many other possibilities exist [e.g. 6, 7].
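The parity rule just described can be illustrated in a few lines of Python: the bit-wise exclusive-or of all surviving sectors (data and parity) reconstructs the lost sector. The 4-byte "sectors" are toy data:

```python
# Sketch of RAID level 4/5 single-failure recovery: the parity sector is the
# bit-wise XOR of the parallel data sectors, so any one lost sector equals
# the XOR of all surviving sectors (data and parity). Toy 4-byte sectors.

from functools import reduce

def xor_sectors(sectors):
    """Bit-wise XOR of equally sized byte strings."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), sectors)

data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xff\x00\xff\x00"]
parity = xor_sectors(data)              # written to the parity disk

# Disk 1 fails; rebuild its sector from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_sectors(survivors)
print(rebuilt == data[1])   # True
```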
Since its inception, the architecture of failure-tolerant disk arrays has undergone constant innovation, and more erasure-correcting codes and disk array layouts have been proposed [8, 9]. To evaluate their effectiveness, a disk array consisting of several RAID level 6 stripes is used as a point of comparison. It seems, however, that no single calculation of RAID level 6 reliability has been published. This short contribution therefore aims at rectifying this lacuna. It is in part motivated by reviewing papers that make mistakes in the assessment of disk array reliability. We are right now experiencing a revolution in the storage industry where flash replaces disk, and maybe soon PCM or another non-volatile memory architecture will replace both. Our analysis remains valid independent of the chosen underlying technology.

Figure 1. A disk array made up of five stripes organized as a level 6 RAID. Each stripe consists of six data disks, a p-parity disk, and a q-parity disk.

2 Failure tolerance of an ensemble of RAID level 6 with or without distributed parity

We first look at the classic level 6 RAID, depicted in Figure 1. The RAID consists of $m$ stripes ($m = 5$ in the figure), each with $n$ data disks and 2 parity disks. In Figure 1, we have two types of parity disks: a p-parity, calculated as the bitwise exclusive-or of the data disks in the stripe, and a q-parity, calculated using Galois field operations. If we distribute parity, we no longer have dedicated parity disks, but each disk will have p- and q-parity blocks in equal proportion. We will also look at two variants. The first variant uses declustering [10], where all disks are divided into disklets of equal size.
To $mn$ of the disklets in a parallel position we assign the role of data disklet, to $m$ we assign the role of p-parity disklet, and to the remaining $m$ we assign the role of q-parity disklet, in such a manner that the organization of Figure 1 holds at the disklet level. Without declustering, a single or double failure in a stripe makes us read almost all or all of the disks in the reliability stripe; the recovery load is thus limited to a single stripe. In the declustered version, this recovery load is evenly distributed over all surviving disks. On the negative side, if three disks have failed, then it is likely (to be made more precise below) that three of the failed disklets are located in the same stripe in one of the many configurations. Thus, the possibility of withstanding failures is smaller, but on the other side, recovery is faster. The second variant adds a second q-parity to each stripe. Since there are too many contenders for the title of RAID level 7, I just call the resulting organization a level 6+ RAID. Now the disk array architecture can withstand up to and including three failed disks in a single stripe without losing data.

a. Reliability of the non-declustered level 6 RAID

We first calculate the robustness, that is, the probability with which an ensemble such as that shown in Figure 1, with $m$ stripes of $n+2$ disks, can withstand $f$ individual disk failures. Clearly, in order not to have lost any data, each of the stripes can have at most two failures. We call such a selection of $f$ failed disks a benign failure pattern. Let $f_1$ be the number of stripes with one failed disk and $f_2$ be the number of stripes with two failed disks in a benign failure pattern. Clearly, $f_1 + 2 f_2 = f$ or, trivially equivalently, $f_1 = f - 2 f_2$. We now count the number of benign failure patterns with $f$ failed disks. We set $N = m(n+2)$, the total number of disks. We first select the $f_1$ stripes with a single failure out of the $m$ stripes.
Then we select the $f_2$ stripes out of the remaining $m - f_1$ stripes. Finally, we select one of the $n+2$ disks in each of the $f_1$ stripes and two of the $n+2$ disks in each of the $f_2$ stripes. This gives us for the number of benign patterns

$$A(f) = \sum_{f_1 + 2 f_2 = f} \binom{m}{f_1} \binom{m - f_1}{f_2} \binom{n+2}{1}^{f_1} \binom{n+2}{2}^{f_2}.$$

The probability of data loss is given by

$$P_{\mathrm{loss}}(f) = 1 - \frac{A(f)}{\binom{N}{f}},$$

that is, as the complementary probability of the quotient of benign failure patterns of size $f$ over all failure patterns of size $f$. Despite its seeming complexity, mathematical software like Mathematica, or software developed using the infinite integer precision afforded by Python, is able to calculate this exactly, though not instantaneously. For small $f$, we can obtain formulae that are easier to evaluate. For $f \le 2$, the probability of data loss is zero, since the disk array can always tolerate two failures. For $f = 3$, it is easier to count the malignant failure patterns. Three disks can only cause data loss if they are all located in the same stripe. We model the failures as occurring serially in time. The first failure then determines a stripe. The next two failures need to fall into the same stripe. There are $\binom{n+1}{2}$ possibilities to form this malign pattern out of $\binom{N-1}{2}$ possibilities to select two additional disks to fail. Thus, data loss happens with probability

$$P_{\mathrm{loss}}(3) = \frac{\binom{n+1}{2}}{\binom{N-1}{2}}.$$

For $f = 4$, a malign failure pattern can have either four failed disks in the same stripe or three failed disks in the same stripe and another disk somewhere in another stripe. This gives

$$m \binom{n+2}{4} + m \binom{n+2}{3} (m-1)(n+2)$$

malign patterns, and the corresponding data-loss probability evaluates to this count divided by $\binom{N}{4}$.

b. Reliability of the non-declustered level 6+ RAID

We extend our methodology to the three-failure tolerance of each single stripe. The number of benign failure patterns is obtained by first selecting non-negative integers $f_1$, $f_2$, and $f_3$ such that $f_1 + 2 f_2 + 3 f_3 = f$, representing the numbers of stripes with one, two, and three failures, respectively.
Then we select $f_1$ stripes out of $m$, then $f_2$ stripes out of the remaining $m - f_1$, then $f_3$ out of the remaining $m - f_1 - f_2$ stripes (or $\binom{m}{f_1}\binom{m - f_1}{f_2}\binom{m - f_1 - f_2}{f_3}$ so far), and then for each of the stripes with one failure, one disk out of $n+3$, for each of the stripes with two failures, two out of $n+3$, and finally, for each of the stripes with three failures, three out of $n+3$ disks, giving us

$$A^{+}(f) = \sum_{f_1 + 2 f_2 + 3 f_3 = f} \binom{m}{f_1}\binom{m - f_1}{f_2}\binom{m - f_1 - f_2}{f_3} \binom{n+3}{1}^{f_1} \binom{n+3}{2}^{f_2} \binom{n+3}{3}^{f_3}.$$

Figure 2. Data loss probability of a RAID level 6 given $f$ failures. The array has $m$ stripes with $n$ data disks and two parity disks per stripe.

Figure 3. Data loss probability of a RAID level 6+ given $f$ failures. The array has $m$ stripes with $n$ data disks and three parity disks per stripe.

We present an example for RAID level 6 in Figure 2 and for RAID level 6+ in Figure 3.

c. Reliability of declustered RAID

There are many ways to decluster [10] a disk array. Sometimes, people have defined layouts, assuming that all disks are equal. In modern data centers, it is more likely that binary large objects (blobs) or streams are stored by grabbing disklets on different data disks, adding two or three parity disklets, and then repeating the process until the object or the stream has been stored. The first method has the advantage that we can calculate the data-loss probability exactly; we can then use the result in the former case to approximate the second. We therefore assume here a pre-defined layout. In more detail, we assume that all disks in the disk array have equal size. Since reliability striping at the individual sector level (4 kB) would require much metadata and gain little in dividing the recovery workload among all disks in the array, we assume that each disk is split into $d$ disklets.
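Before turning to declustering, the counting arguments above lend themselves to direct computation; as the text notes, the unbounded integer precision of Python makes the exact calculation easy. The following sketch counts benign patterns for both level 6 (two parity disks per stripe) and level 6+ (three); the parameters follow the array of Figure 1:

```python
# Exact data-loss probabilities for an array of m RAID level 6 (or 6+)
# stripes with n data disks per stripe, following the counting argument
# in the text. Python integers are exact, so no rounding occurs.

from fractions import Fraction
from math import comb

def benign_patterns(m, n, f, parity=2):
    """Count the f-disk failure patterns in which no stripe has more than
    `parity` failed disks (the 'benign failure patterns' of the text)."""
    s = n + parity                        # disks per stripe
    def count(stripes_left, failures_left, k):
        if failures_left == 0:
            return 1
        if k > parity:                    # some stripe would exceed its tolerance
            return 0
        total = 0
        # j stripes each receive exactly k failed disks
        for j in range(failures_left // k + 1):
            total += (comb(stripes_left, j) * comb(s, k) ** j *
                      count(stripes_left - j, failures_left - j * k, k + 1))
        return total
    return count(m, f, 1)

def data_loss(m, n, f, parity=2):
    """Exact probability that f random disk failures cause data loss."""
    return 1 - Fraction(benign_patterns(m, n, f, parity),
                        comb(m * (n + parity), f))

# The array of Figure 1: m = 5 stripes, n = 6 data disks (RAID level 6).
print(data_loss(5, 6, 2))            # 0 -- any two failures are survived
print(data_loss(5, 6, 3))            # 7/247, matching C(n+1,2)/C(N-1,2)
print(data_loss(5, 6, 3, parity=3))  # 0 -- level 6+ survives any three
```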
In order to distribute the workload, the number $d$ of disklets should be at least in the hundreds, but it does not need to be much more.

Figure 4. Average deviation from fair entanglement in dependence on the number $d$ of disklets.

Indeed, a disklet on a healthy disk in a disk array with one failed disk (the most common scenario) is potentially involved in recovery only if it is in the same stripe as the corresponding disklet on the failed disk. Therefore, a disklet is potentially involved with probability $1/m$, where, as we recall, $m$ is the number of stripes. In a RAID level 6 or 6+, we only need to read $n$ of the remaining $n+1$ or $n+2$ healthy disklets, respectively, which gives the file system some choice that can be used for further balancing. Even without taking advantage of this, the number of times $t$ that a disklet on a healthy disk is entangled with the failed disk is governed by the binomial distribution:

$$P(T = t) = \binom{d}{t} \left(\frac{1}{m}\right)^{t} \left(1 - \frac{1}{m}\right)^{d - t}.$$

The proportion of entangled disklets on a disk has expected value $1/m$, but the average disk has a proportion at distance

$$\sqrt{\frac{1}{m}\left(1 - \frac{1}{m}\right)\frac{1}{d}}$$

from it, which quickly approaches zero, as we can see in Figure 4. There, we depict the average deviation of a disk from optimal entanglement. By multiplying this number by three, we get bounds on the difference from optimal entanglement within which approximately 99.7% of all disks fall. Because the numbers in Figure 4 are already quite small for $d$ in the mid-hundreds, we can assume this magnitude for $d$. Operationally, the larger $d$, the more metadata needs to be kept, but also the longer it takes for a stream to fill up a disklet. More importantly in our context, the capability of a declustered RAID level 6 or RAID level 6+ to survive one additional disk failure beyond the guaranteed tolerance (of two and three, respectively) depends on $d$.
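The deviation just derived is easy to tabulate. A small Python sketch of the quantity plotted in Figure 4 (the values of $m$ and $d$ below are illustrative):

```python
# Sketch: average deviation of a disk's proportion of entangled disklets
# from the fair value 1/m, i.e. sqrt((1/m)(1 - 1/m)/d) as derived above.
# m (number of stripes) and the d values are illustrative.

from math import sqrt

def entanglement_deviation(m, d):
    p = 1.0 / m                     # probability a disklet is entangled
    return sqrt(p * (1 - p) / d)   # std. dev. of the binomial proportion

m = 5
for d in (100, 400, 900):
    dev = entanglement_deviation(m, d)
    print(f"d = {d:3d}: deviation = {dev:.4f}, 3-sigma band = {3 * dev:.4f}")
```

As expected, the deviation shrinks like $1/\sqrt{d}$: quadrupling $d$ halves it.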
If a configuration without declustering tolerates $f$ failures with probability $p$, then its declustered equivalent with $d$ disklets per disk survives with probability $p^d$. Because the data-loss probabilities $1-p$ with $f=3$ in the case of RAID level 6 and $f=4$ in the case of RAID level 6+ are so low, the data-loss probability $1-p^d$ of the declustered RAID is higher than what one (or at least this author) would intuitively guess.

Figure 5. Data-loss probability in the presence of three failed disks in a declustered RAID level 6 with $m$ stripes and $n$ data disks per stripe, in dependence on $d$, the number of disklets per disk.

Figure 6. Data-loss probability in the presence of four failed disks in a declustered RAID level 6+ with $m$ stripes and $n$ data disks per stripe, in dependence on $d$, the number of disklets per disk.

Figure 7. Data-loss probability in the presence of five failed disks in a declustered RAID level 6+ with $m$ stripes and $n$ data disks per stripe, in dependence on $d$, the number of disklets per disk.

As we can see from Figures 5–7, the data-loss probabilities in the declustered layout are substantial, but not certain, for moderate values of $d$. This shows that a reliability analysis of declustered disk arrays underestimates their reliability if it assumes that the array cannot tolerate more failures than its guaranteed tolerance, i.e. two in the case of level 6 and three in the case of level 6+.

3 Markov modeling

Reliability should not exclusively be assessed by using data-loss probabilities in dependence on the number of failed disks. The true measures of reliability are the mean time to data loss (MTTDL), the survival rate of data for the economic life-span of a disk array, or the annual loss rate.
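The amplification effect can be made concrete with a one-liner; the independence reading ($d$ disklet layers, each surviving with probability $p$) is an assumption of our reconstruction, and the numbers are illustrative:

```python
def declustered_loss(p, d):
    """Data-loss probability of the declustered array when each of the d
    disklet layers independently survives with probability p (an assumed
    reading of the text, not the paper's own code)."""
    return 1 - p ** d

# Even a tiny per-layer loss probability is amplified by declustering:
loss = declustered_loss(0.9999, 400)   # about 0.039, not 0.0001
```

This is why the curves in Figures 5–7 rise with $d$ faster than intuition suggests.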
We obtain these numbers using a Markov or a semi-Markov model, or even a Petri net. In this section, we give the formulae for calculating transition probabilities and then apply them to calculate the MTTDL for declustered arrays using the usual simplifying assumptions.

Figure 8. Generic Markov model.

a. A generic Markov model

We should capture the current condition of a disk array in as few states as possible in order to simplify calculations. Often, it is possible to characterize the state of a disk array just by the current number of unavailable disks. The result is a state diagram such as the one presented in Figure 8. The normal start state is state 0. There is also an absorbing state, state F, that characterizes a disk array that has lost data. The other states in Figure 8 represent situations where the array has suffered loss of access to a disk, but where all data stored in it can be recovered. State $i$ is a state where $i$ disks have failed. We have a failure transition from state $i$ to state $i+1$ corresponding to the failure of an additional disk, if that failure does not lead to loss of data, and another transition from state $i$ to state F in case the failure has induced data loss. If data on a disk has been recovered and saved on a spare or replacement disk, then we have a repair transition from state $i$ to state $i-1$. This model cannot capture all scenarios. For example, a disk array might have a small number of spare disks, but in case of failures, it might take a technician weeks to replace the failed disks with new spare disks. The rates at which transitions are taken in the model reflect reality more or less faithfully. In the case of failures, a common first-degree approximation is the assumption of exponential failure times. In this case, the probability that a functioning disk fails in a given small time interval is constant.
(Thus, the hazard rate, in the lingo of reliability theory, is constant.) It is however known that disk failure times are modeled with more accuracy using a Weibull distribution, or even better, using real disk lifespans [11]. Much more problematic is the modeling of repair times, as exponential repair times are hardly realistic. An exponential repair time assigns positive probabilities to physically impossible short repairs and to very lengthy repairs. If technicians with normal working hours are involved, times to repair will also depend on the time at which the failure occurred. If both failures and repairs are exponentially distributed, then the state diagram of Figure 8 represents a true Markov model. For the researcher, this model is very attractive because it sometimes allows closed-form solutions for the mean time to data loss and similar measures, and, failing that, simpler methods for numerical solution.

b. Calculating state transition probabilities

The rate with which failure transitions are taken depends on the number of disks that can fail and on the probability that an additional failure will lead to data loss. In the simple case of a Markov model, these are the only contributors. We call $q_i$ the probability that an additional disk failure in state $i$ leads to data loss. (This is not necessarily the $(i+1)$-st disk failure overall, since the system might have experienced successful repairs.) Thus, if we assume exponential repair times that are independent of each other and exponential failure times, this gives us a Markov model with:

1) non-failure states, where state $i$ represents a disk array with $i$ failed disks;
2) a failure state, state F, which is absorbing;
3) the start state, state 0.
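The contrast between the constant hazard of an exponential lifetime and the time-varying hazard of a Weibull lifetime can be illustrated directly from the textbook hazard-rate formulas; the parameter values in this sketch are illustrative only:

```python
def exp_hazard(t, lam):
    """Hazard rate of an exponential lifetime: constant lam, independent of t."""
    return lam

def weibull_hazard(t, shape, scale):
    """Hazard rate of a Weibull lifetime: (k/s) * (t/s)**(k-1).
    shape < 1 gives a decreasing hazard (infant mortality),
    shape > 1 gives an increasing hazard (wear-out),
    shape = 1 reduces to the exponential case."""
    return (shape / scale) * (t / scale) ** (shape - 1)
```

This is why a semi-Markov model (or empirical lifespans) is needed once the constant-hazard assumption is dropped: the transition rate out of a state then depends on how long the disk has already been in service.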
4) There is a repair transition from state $i$ to state $i-1$, taken with rate $i\mu$ for a fixed "repair rate" $\mu$.
5) There is a failure transition not representing data loss from state $i$ to state $i+1$, taken with rate $(N-i)\lambda(1-q_i)$.
6) There is a failure transition representing data loss from state $i$ to state F, taken with rate $(N-i)\lambda q_i$.

Here $\lambda$ is the failure rate of an individual disk and $N$ the number of disks.

c. MTTDL calculation for the Markov model

If we collect the current probabilities of being in state $i$ at time $t$ in a column vector $p(t)$ and the transition rates into a transition matrix $Q$, then we have the fundamental Chapman-Kolmogorov equation

$$\frac{d}{dt}p(t) = Q\,p(t).$$

The probability that a system has not suffered data loss at time $t$ is given by the sum of all coefficients of $p(t)$, namely $\mathbf{1}^{\mathsf T}p(t)$, with $\mathbf{1}^{\mathsf T}$ a transposed column vector containing one-coefficients. The MTTDL of the system is then given as

$$\mathrm{MTTDL} = \int_0^\infty \mathbf{1}^{\mathsf T}p(t)\,dt.$$

By applying the Laplace transform to the fundamental differential equation, we obtain

$$s\,p^*(s) - p(0) = Q\,p^*(s),$$

and after setting the new indeterminate $s$ to zero, the definition of the Laplace transform gives us

$$-p(0) = Q\,p^*(0) = Q\int_0^\infty p(t)\,dt.$$

We then multiply with $\mathbf{1}^{\mathsf T}Q^{-1}$ in order to obtain

$$\mathrm{MTTDL} = \int_0^\infty \mathbf{1}^{\mathsf T}p(t)\,dt = -\mathbf{1}^{\mathsf T}Q^{-1}p(0).$$

The value of $p(0)$ is of course the unit vector $e_0$, since the system starts out in state 0. It should be noted that the calculation of the inverse is not without numerical challenges, as the magnitude of the repair transitions is much larger than the magnitude of the failure transitions, so that the matrix $Q$ is ill-conditioned. However, this matters only if good software for numerical inversion is not used, and in many cases just using double precision suffices for reasonable accuracy.

d.
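The recipe $\mathrm{MTTDL} = -\mathbf{1}^{\mathsf T}Q^{-1}p(0)$ can be sketched in a few lines. Here $Q$ is restricted to the transient (non-failure) states, the loss probabilities $q_i$ are supplied by the caller, and the repair convention (rate $i\mu$ from state $i$) follows our reading of the model above; this is an illustrative sketch, not the paper's code:

```python
import numpy as np

def build_Q(N, lam, mu, q):
    """Transition-rate matrix restricted to the k = len(q) transient states
    0..k-1; q[i] is the probability that a failure in state i loses data.
    The last entry of q must be 1, so that the transient set is closed.
    Assumed convention: repairs from state i proceed at rate i*mu."""
    k = len(q)
    Q = np.zeros((k, k))
    for i in range(k):
        out = (N - i) * lam                    # total failure rate in state i
        Q[i, i] -= out + i * mu                # leaving state i
        if i + 1 < k:
            Q[i + 1, i] += out * (1 - q[i])    # survived additional failure
        if i > 0:
            Q[i - 1, i] += i * mu              # repair of one failed disk
    return Q

def mttdl(N, lam, mu, q):
    """MTTDL = -1^T Q^{-1} p(0), with p(0) the unit vector for state 0."""
    Q = build_Q(N, lam, mu, q)
    p0 = np.zeros(len(q)); p0[0] = 1.0
    return -np.ones(len(q)) @ np.linalg.solve(Q, p0)
```

Using `np.linalg.solve` instead of forming the inverse explicitly sidesteps some of the conditioning issues mentioned above.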
Example: declustered RAID levels

In the case of a highly declustered RAID level 5 (one parity per stripe), RAID level 6 (two parities per stripe), and RAID level 6+ (three parities per stripe), the Markov models are fairly simple, because then any more than one, two, and three failures, respectively, has to lead to data loss. In the case of RAID level 5, the fundamental differential equation for the state vector made up of the probabilities of the system being in state 0 or state 1,

$$p(t) = \begin{pmatrix} p_0(t) \\ p_1(t) \end{pmatrix},$$

is simply

$$\frac{dp_0}{dt} = -N\lambda\,p_0 + \mu\,p_1, \qquad \frac{dp_1}{dt} = N\lambda\,p_0 - (N-1)\lambda\,p_1 - \mu\,p_1.$$

The first addend in the first equation corresponds to the failure transition, taken with rate $N\lambda$, from state 0 to state 1; the second summand to the repair transition, taken with rate $\mu$, from state 1 to state 0; the first summand in the second equation to the failure transition from state 0 to state 1 seen above; the second summand to the failure transition, taken with rate $(N-1)\lambda$, from state 1 to the failure state; and the last addend to the repair transition from state 1 to state 0 again. The transition matrix is therefore

$$Q = \begin{pmatrix} -N\lambda & \mu \\ N\lambda & -(N-1)\lambda - \mu \end{pmatrix},$$

so that the MTTDL is simply

$$\mathrm{MTTDL} = \frac{(2N-1)\lambda + \mu}{N(N-1)\lambda^2}.$$

In the case of a RAID level 6, our transition matrix needs to capture transitions between three non-failure states and therefore has shape $3 \times 3$; the MTTDL is obtained in the same manner as $-\mathbf{1}^{\mathsf T}Q^{-1}e_0$, which again admits a (lengthier) closed form. In the case of the RAID level 6+ with a very high degree of declustering, we can assume that the storage system will suffer data loss whenever four or more disks are currently not functioning. In this case, the Markov model becomes particularly easy; the MTTDL is given below, where $\lambda$ is the individual disk failure rate, that is, the inverse of the expected mean time to failure (MTTF) of a disk, $\mu$ is the repair rate, that is, the inverse of the expected mean time to repair (MTTR), and $N$ is the number of disks.
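The closed form for the fully declustered RAID level 5 can be verified directly against matrix inversion; the component rates in this standalone sketch are hypothetical (MTTF 100,000 h, MTTR 8 h):

```python
import numpy as np

# Fully declustered RAID 5: two transient states (0 or 1 failed disks).
N, lam, mu = 90, 1.0 / 100_000, 1.0 / 8     # illustrative rates

Q = np.array([[-N * lam,                  mu],
              [ N * lam, -(N - 1) * lam - mu]])

p0 = np.array([1.0, 0.0])                    # the system starts in state 0
mttdl_numeric = -np.ones(2) @ np.linalg.solve(Q, p0)
mttdl_closed  = ((2 * N - 1) * lam + mu) / (N * (N - 1) * lam**2)
```

The two values agree to machine precision, which is a useful sanity check before moving on to the larger level 6 and level 6+ matrices, where the closed forms are unwieldy.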
The MTTDL is then $-\mathbf{1}^{\mathsf T}Q^{-1}e_0$ for the corresponding $4 \times 4$ transition matrix, which is given in closed form as a rational function of $\lambda$, $\mu$, and $N$.

Figure 9. Mean time to data loss in hours for a fully declustered level 5 (top), level 6 (middle), and level 6+ (bottom) RAID, in dependence on the MTTF of each of the individual disks. Both scales are logarithmic.

To illustrate this, Figure 9 gives the MTTDL in hours for an ensemble of 90 disks, for disk MTTFs ranging between 1000 hours and 1 million hours. The MTTRs are 8 hours, 36 hours, and 100 hours.

e. Discussion

MTTDL figures are among the least useful figures of merit for the reliability of a disk array, but they have the singular advantage of often being derivable exactly. Other figures of merit, such as the probability of surviving the economic lifespan of a disk array without data loss, are more important. The fantastic figures (an MTTDL of 100 million years with normal disks!) reflect in part the lack of modeling of catastrophic events and essential component failures, and in part the implicit assumption that disks fail independently. Numerically, e.g. using the Euler method to solve the fundamental differential equation of the Markov model, we can derive numbers for the survival probability of the data in an array over 5 years. Another serious limitation of the current analysis is its concentration on disk failures only. However, as long as the merits of various disk array layouts are compared, this is more of a feature than a bug. In other words, even if one converts the MTTDL numbers into a failure rate per year, the numbers are not realistic, but they still serve to compare the reliability merits of different disk arrays.
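The Euler-method computation of the 5-year survival probability mentioned above can be sketched as follows; the rates are illustrative, and the repair convention (rate $i\mu$ from state $i$) is an assumption of our reading of the model:

```python
import numpy as np

# Transient states 0..3 of a fully declustered RAID level 6+ with N disks:
# any fourth concurrent failure loses data.
N, lam, mu = 90, 1.0 / 100_000, 1.0 / 8     # illustrative rates

k = 4
Q = np.zeros((k, k))
for i in range(k):
    Q[i, i] -= (N - i) * lam + i * mu       # leave state i by failure or repair
    if i + 1 < k:
        Q[i + 1, i] += (N - i) * lam        # tolerated additional failure
    if i > 0:
        Q[i - 1, i] += i * mu               # repair of one failed disk

p = np.array([1.0, 0.0, 0.0, 0.0])          # start with no failed disks
dt, horizon = 1.0, 5 * 365 * 24             # 1-hour Euler steps over 5 years
for _ in range(int(horizon / dt)):
    p = p + dt * (Q @ p)                    # explicit Euler step

survival_5y = p.sum()   # probability of no data loss within 5 years
```

Probability mass leaking out of the transient states over the 5-year horizon is exactly the data-loss probability, the figure of merit argued above to be more meaningful than the MTTDL.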
It is feasible to expand the Markov model to also capture latent sector errors and disk scrubbing, but this goes far beyond the goals of this paper. It is also possible to use Markov models in order to deal with infant disk mortality [12]. All these possibilities soon run into the difficulty of making good modeling assumptions. Furthermore, if one is interested in actual failure rates and not in the relative merits of disk array layouts, one might have to use actual failure statistics or at least change the Markov model to a semi-Markov model in order to model the more generally applicable Weibull failure distribution [11]. Finally, while the large-scale storage systems of the foreseeable future might be disk based, the superior performance of flash and, soon, PCM memories will see the expansion of this type of very fast storage. What we have learned about device failure behavior from disks might or might not transfer to calculating the reliability of these devices.

4 Conclusion

We defined two classical extensions of RAID level 5 and derived formulae for the capability of these disk arrays to survive a given number of disk failures. We then showed how to use these numbers in order to derive one particular figure of merit for the resilience of a disk array organized in this manner. No model reflects reality, but some are useful, as the saying goes. We have then shown by example how to use these calculations to determine the mean time to data loss of various declustered RAID levels. RAID level 6 and level 6+ can serve as reference points for comparing the resilience resulting from using one of the many more sophisticated codes that trade off a slight increase in parity storage overhead for faster average recoveries.

References

[1] J. Gray, personal communication.

[2] L. Bairavasundaram, G. Goodson, S. Pasupathy, and J.
Schindler, "An analysis of latent sector errors in disk drives." SIGMETRICS Performance Evaluation Review, 35 (1), ACM, 289–300, 2007.

[3] B. Schroeder, S. Damouras, and P. Gill, "Understanding latent sector errors and how to protect against them." ACM Transactions on Storage (TOS), 6 (3), 1–23, 2010.

[4] B. Schroeder and G. A. Gibson, "Understanding disk failure rates: what does an MTTF of 1,000,000 hours mean to you?" ACM Transactions on Storage (TOS), 3 (3), 1–16, 2007.

[5] A. Dholakia, E. Eleftheriou, X. Y. Hu, I. Iliadis, J. Menon, and K. K. Rao, "A new intra-disk redundancy scheme for high-reliability RAID storage systems in the presence of unrecoverable errors." ACM Transactions on Storage (TOS), 4 (1), 1–42, 2008.

[6] M. Blaum, J. Brady, J. Bruck, and J. Menon, "EVENODD: an efficient scheme for tolerating double disk failures in RAID architectures." IEEE Transactions on Computers, 44 (2), 192–202, 1995.

[7] J. F. Pâris, A. Amer, and T. Schwarz, "Low-redundancy two-dimensional RAID arrays." International Conference on Computing, Networking and Communications (ICNC), 507–511, IEEE, 2012.

[8] P. Chen, E. Lee, G. Gibson, R. Katz, and D. Patterson, "RAID: high-performance, reliable secondary storage." ACM Computing Surveys (CSUR), 26 (2), 145–185, 1994.

[9] T. Schwarz and W. Burkhard, "Reliability and performance of RAIDs." IEEE Transactions on Magnetics, 31 (2), 1161–1166, 1995.

[10] M. Holland and G. A. Gibson, "Parity declustering for continuous operation in redundant disk arrays." ACM SIGPLAN Notices, 27, Proceedings of ASPLOS 92 (9), 23–35, 1992.

[11] J. Elerath and M. Pecht, "A highly accurate method for assessing reliability of redundant arrays of inexpensive disks (RAID)." IEEE Transactions on Computers, 58 (3), 289–299, 2009.

[12] Q. Xin, T. Schwarz, and E. Miller,
"Disk infant mortality in large storage systems." In 13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, 125–134, 2005.

International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 1, pages 45–58, p-ISSN 2655-8564, e-ISSN 2685-9432

Area under the curves, volume of rotated, and surface area rotated about slanted line

Billy Suandito
Department of Primary Teacher Education, Musi Charitas Catholic University, Palembang, Indonesia
Corresponding author: billy_s@ukmc.ac.id
(received 29-06-2019; revised 05-08-2019; accepted 05-08-2019)

Abstract

In calculus, a definite integral can be used to calculate the area between curves and coordinate axes on certain intervals, as well as the surface area and volume formed if a region is rotated about a coordinate axis. Problems arise if one wants to calculate the area of a region bounded by a curve and a line that forms an angle other than 0° or 90° with the coordinate axes, as well as the volume and the surface area of a solid of revolution whose axis of rotation is a slanted line. By using existing definitions, formulas are developed for this purpose. This paper produces ready-to-use formulas, to simplify the calculations for calculus users who do not wish to repeat the derivation. The method used is the one commonly used; what is new is that these formulas have not yet been published for teaching in colleges or universities in Indonesia.
Keywords: area under the curve, volume of a solid of revolution, surface area of a solid of revolution, slanted axis

1 Introduction

Mathematics is one of the subjects given to students from elementary school to university. One part of mathematics is calculus. Calculus is taught from high school to university, especially in study programs related to mathematics and science. A calculus course consists of functions, limits, derivatives, applications of derivatives, anti-derivatives, indefinite integrals, applications of definite integrals, and integration techniques [3]. The integral is a part of mathematical analysis that continues to grow rapidly, both in theory and in its applications. Applications of the integral include calculating the area of a plane region, the volume of a solid of revolution, and the surface area of a solid of revolution. The volume and the surface area of a solid of revolution are obtained by rotating a flat region about an axis of rotation [6]. The theory for calculating the area bounded by curves and the x axis or y axis on an interval is commonly studied in calculus; likewise the volume of the solid of revolution that occurs when a curve is rotated about the x axis or y axis, or the surface area of the solid that occurs when a region is rotated about the x axis or y axis [1,3]. Problems arise if we want to calculate the area between a curve and a sloping straight line, that is, a line that forms an angle with the positive x axis, as well as the surface area and the volume of the solid of revolution obtained when a region is rotated about a slanted line, as shown in Figure 1 [4,6].
Figure 1. The region R between the curve y = f(x) and the line y = mx + b on the interval [a, b].

This paper is intended to derive formulas to calculate the area under a curve down to a slanted line, the volume of the solid of revolution about a slanted line, and the surface area of the solid of revolution about a slanted line. This paper complements the existing literature and also makes it easier for calculus users to calculate the area under a curve down to a slanted line, the surface area of the solid of revolution about a slanted line, and the volume of the solid of revolution about a slanted line.

2 Research methodology

The method used in this article is a literature review. The steps of the writing are as follows.

1. Reviewing the calculus textbooks of Purcell, Thomas, and James Stewart, in particular the applications for calculating the area under a curve down to one of the coordinate axes and the solids of revolution that occur when a region traversed by the coordinate axes is rotated.

2. Of the books by Purcell, Thomas, and James Stewart, only James Stewart's book contains an assignment to determine the area under a curve down to a slanted line.

3. From this, formulas were developed to determine the surface area and volume that occur if the region is rotated about the slanted line.

4. The formulas were subsequently tested on students taking the calculus class in the industrial engineering and informatics engineering departments.

Due to time constraints, this article only contains the additions for calculating the area under a curve down to a slanted line, and the surface area and volume of the solid that occurs if a region is rotated 360° about a slanted line.

3 Results and discussion

By using integral calculus we can calculate the area as in Figure 2. Consider the region shown in Figure 2.
The region R is bounded by the x axis, the lines x = a and x = b, and the curve having the equation y = f(x), with f a continuous function on the closed interval [a, b]. For simplicity, take f(x) ≥ 0 for all x in [a, b]. We will assign a value A as the measure of the area of R.

Figure 2. Area under the curve down to the x axis.

First, specify an inscribed polygonal area: the closed interval [a, b] is divided into n subintervals. To make it easier, each subinterval has the same length, namely $\Delta x = (b-a)/n$. Denote the endpoints of these subintervals by $x_0 = a$, $x_1 = a + \Delta x$, $x_2 = a + 2\Delta x$, ..., $x_n = b$, and the i-th subinterval by $[x_{i-1}, x_i]$. Since f is a continuous function on the closed interval [a, b], it is continuous on each subinterval [2]. According to the extreme value theorem, there is a number in each subinterval at which f attains its absolute minimum there. Suppose that in the i-th subinterval this number is $c_i$, so that $f(c_i)$ is the absolute minimum value of f on $[x_{i-1}, x_i]$. Consider n rectangular pieces, each of which has a width of $\Delta x$ units and a height of $f(c_i)$ units. Let $S_n$ denote the total area of these n rectangular pieces; then

$$S_n = f(c_1)\,\Delta x + f(c_2)\,\Delta x + \cdots + f(c_n)\,\Delta x, \quad\text{or}\quad S_n = \sum_{i=1}^{n} f(c_i)\,\Delta x.$$

Each addend on the right-hand side of the formula above gives the area of one of the rectangles.

Definition 1: Suppose the function f is continuous on the closed interval [a, b], with f(x) ≥ 0 for each x in [a, b], and that R is the region bounded by the curve y = f(x), the x axis, and the lines x = a and x = b. The interval [a, b] is divided into n subintervals, each of length $\Delta x = (b-a)/n$, and the i-th subinterval is denoted $[x_{i-1}, x_i]$. If $f(c_i)$ is the absolute minimum value of the function on the i-th subinterval, the measure A of the area of region R is given by [5]

$$A = \lim_{n\to\infty} \sum_{i=1}^{n} f(c_i)\,\Delta x.$$

Definition 2: If f is a function defined on the closed interval [a, b], then the definite integral of f from a to b, denoted $\int_a^b f(x)\,dx$, is given by

$$\int_a^b f(x)\,dx = \lim_{n\to\infty} \sum_{i=1}^{n} f(c_i)\,\Delta x,$$

if the limit exists.
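Definitions 1 and 2 can be checked numerically. A minimal sketch using the minimum of f on each subinterval, as in the construction above, for the hypothetical choice f(x) = x² on [0, 1], where the exact integral is 1/3:

```python
def lower_riemann_sum(f, a, b, n):
    """Riemann sum using the minimum of f on each subinterval, approximated
    here by the smaller of the two endpoint values (exact for monotone f)."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * dx, a + (i + 1) * dx
        total += min(f(x0), f(x1)) * dx
    return total

approx = lower_riemann_sum(lambda x: x * x, 0.0, 1.0, 10_000)
# approx tends to 1/3 from below as n grows
```

Since f(x) = x² is increasing, the lower sums underestimate the area and converge monotonically to the definite integral, exactly as Definition 1 asserts.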
Definition 3: Suppose the function f is continuous on the closed interval [a, b], with f(x) ≥ 0 for each x in [a, b], and that R is the region bounded by the curve y = f(x), the x axis, and the lines x = a and x = b. Then the measure A of the area of R is given by

$$A = \int_a^b f(x)\,dx.$$

Suppose the function f is continuous on the closed interval [a, b], with f(x) ≥ 0 for each x in [a, b], and that S is the solid obtained by rotating about the x axis the region bounded by the curve y = f(x), the x axis, and the lines x = a and x = b. Then the measure V of the volume of S is given by

$$V = \pi \int_a^b [f(x)]^2\,dx.$$

Definition 4: Suppose the function f is continuous on the closed interval [a, b], with f(x) ≥ 0 for each x in [a, b], and that S is the surface obtained by rotating about the x axis the curve y = f(x) between the lines x = a and x = b. Then the measure of the area of S is given by [7]

$$S = 2\pi \int_a^b f(x)\sqrt{1 + [f'(x)]^2}\,dx.$$

From the above definitions, we start the discussion with the area under the curve down to the slanted line, then the surface area of the solid of revolution about the slanted line, and the volume of the solid of revolution about the slanted line.

a. Area under the curve down to the slanted line y = mx + b, as shown in Figure 3.

Figure 3. Small pieces perpendicular to the line y = mx + b.

Figure 4. Section of the rectangular strip of the region, enlarged.

Figure 5. The region around Δu, enlarged.

Consider a thin strip of the region perpendicular to the line, as in Figures 4 and 5. Since the distance from a point (x, f(x)) on the curve to the line mx − y + b = 0 is

$$h(x) = \frac{f(x) - (mx + b)}{\sqrt{1 + m^2}},$$

and since, with $\tan\alpha = m$ the gradient of the line, a step $\Delta x$ along the curve advances the foot of the perpendicular along the line by

$$\Delta u = \frac{\Delta x + m\,\Delta y}{\sqrt{1 + m^2}} \approx \frac{1 + m f'(x)}{\sqrt{1 + m^2}}\,\Delta x,$$

the area of the strip is approximately $h\,\Delta u$. Summing the strips and passing to the limit, Definition 2 gives

$$A = \frac{1}{1 + m^2} \int_a^b \bigl[f(x) - mx - b\bigr]\bigl[1 + m f'(x)\bigr]\,dx. \qquad (7)$$

This formula is used to calculate the area between the curve and the slanted line. b.
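Formula (7) can be sanity-checked numerically: since slicing the region perpendicular to the line or with vertical strips must give the same area, (7) should agree with the ordinary integral $\int (f(x) - mx - b)\,dx$. A sketch with the hypothetical choice f(x) = √x and the line y = x on [0, 1], where the exact area is 1/6:

```python
from math import sqrt

def simpson(g, a, b, n=10_000):
    """Composite Simpson rule (n even), a plain numerical check."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h)
                          for i in range(1, n))
    return s * h / 3

m, b = 1.0, 0.0                      # the line y = x
f  = lambda x: sqrt(x)               # curve; meets the line at x = 0 and x = 1
df = lambda x: 0.5 / sqrt(x)         # f'(x)

# Formula (7): perpendicular slicing (start just above 0 to avoid df(0))
area_slanted = simpson(lambda x: (f(x) - m * x - b) * (1 + m * df(x)),
                       1e-9, 1.0) / (1 + m**2)
# Ordinary vertical strips
area_vertical = simpson(lambda x: f(x) - m * x - b, 0.0, 1.0)
# Both should be close to the exact value 1/6
```

The agreement of the two integrals is a useful check that the factor $1 + m f'(x)$ in (7) correctly accounts for the spacing $\Delta u$ of the perpendicular strips.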
b. Surface area of the solid of revolution about the slanted line

If the strip is rotated about the line y = mx + b, a thin band is formed with the distance $h(x) = [f(x) - mx - b]/\sqrt{1+m^2}$ as the radius and the arc element $ds = \sqrt{1 + [f'(x)]^2}\,\Delta x$ of the curve as the width, so that the surface area of the band is approximately $2\pi h\,ds$. Summing and passing to the limit gives

$$S = \frac{2\pi}{\sqrt{1 + m^2}} \int_a^b \bigl[f(x) - mx - b\bigr]\sqrt{1 + [f'(x)]^2}\,dx. \qquad (8)$$

This formula is used to calculate the surface area of the solid of revolution about the slanted line.

c. Volume of the solid of revolution about the slanted line

We now derive the formula for the volume of the solid of revolution about the slanted line. If the strip is rotated about the line y = mx + b, it forms a thin disk with $h(x)$ as the radius and $\Delta u$ as the thickness, so its volume is approximately $\pi h^2\,\Delta u$. Summing and passing to the limit gives

$$V = \frac{\pi}{(1 + m^2)^{3/2}} \int_a^b \bigl[f(x) - mx - b\bigr]^2\bigl[1 + m f'(x)\bigr]\,dx. \qquad (9)$$

This formula is used to calculate the volume of the solid of revolution about the slanted line.

d. Example

As an example, the area of the region bounded by a curve and the line y = mx + b shown in Figure 6 is determined, with the perpendiculars to the line drawn from the endpoints.

Figure 6. Visualization of the worked example.

Solution: with the gradient m of the line known, the area is computed using formula (7). The figure formed is a trapezoid, whose area can also be computed as half the sum of the parallel edges times the height, and the two computations agree.

When the region is rotated about the line, the surface that occurs is a conical surface. Its area can be calculated as the lateral surface area of a large cone minus the lateral surface area of a small cone. The lateral surface area of a cone is calculated by the formula $\pi r s$, where r is the radius of the cone's base and s is the slant height (hypotenuse) of the cone. For this example, the base radii and slant heights of the large and small cones are read off from the figure.
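Formula (9) can likewise be sanity-checked numerically: Pappus's centroid theorem, $V = 2\pi \bar R A$ with $\bar R$ the distance of the region's centroid from the axis, provides an independent computation of the same volume. A sketch with the hypothetical choice f(x) = √x and the line y = x on [0, 1], where the exact volume is $\pi/(30\sqrt{2})$:

```python
from math import sqrt, pi

def simpson(g, a, b, n=20_000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h)
                          for i in range(1, n))
    return s * h / 3

m, b = 1.0, 0.0
f  = lambda x: sqrt(x)
df = lambda x: 0.5 / sqrt(max(x, 1e-12))    # guard the endpoint x = 0

# Formula (9): volume of revolution about the slanted line
vol_9 = pi / (1 + m**2)**1.5 * simpson(
    lambda x: (f(x) - m * x - b)**2 * (1 + m * df(x)), 0.0, 1.0)

# Pappus: V = 2*pi * (first moment of area about the line);
# the distance of a point (x, y) from the line y = x is (y - x)/sqrt(2)
moment = simpson(lambda x: (f(x) - x)**2 / (2 * sqrt(2)), 0.0, 1.0)
vol_pappus = 2 * pi * moment
# Both should be close to the exact value pi / (30 * sqrt(2))
```

The inner integral of the Pappus computation collapses the first moment $\iint \frac{y-x}{\sqrt 2}\,dA$ to a single integral in x, so both routes use only one-dimensional quadrature.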
Using formula (8), the surface area of the solid of revolution about the line is computed, and the result agrees with the value obtained from the cone lateral-surface formula $\pi r s$.

The solid that occurs is a truncated cone (frustum). Using the elementary formulas, we compute the volume of the large cone minus the volume of the small cone; using formula (9), the same value is obtained.

4 Conclusions

From the discussion of the worked examples, it can be seen that the results obtained using formulas (7), (8), and (9) do not differ from those obtained using the familiar formulas of plane and solid geometry. Thus the area between a curve and a straight line that is neither horizontal nor vertical, as well as the surface area and the volume of the solid of revolution about such a line, can be calculated using formulas (7), (8), and (9). A suggestion for future researchers is to conduct learning research on the validity of these formulas in the classroom.

Acknowledgements

The author would like to thank his colleagues who have helped prepare everything needed so that this paper could be realized.

References

[1] P. Ferdias and E. A. Savitri, "Analisis materi volume benda putar pada aplikasi cara kerja piston di mesin kendaraan roda dua", Jurnal Pendidikan Matematika Aljabar, 6 (2), 177–182, 2015.

[2] Purcell et al., "Calculus and Analytic Geometry", Pearson, New Jersey, 9th edition, 2007.

[3] Y. Romadiastri, "Penerapan pembelajaran kontekstual pada kalkulus 2 bahasan volum benda putar", Jurnal Phenomenon, 1 (1), 131–143, 2013.

[4] M. S. Rudiyanto and S. B. Waluya, "Pengembangan model pembelajaran matematika volume benda putar berbasis teknologi dengan strategi konstruktivisme student active learning berbantuan CD interaktif kelas XII", Jurnal Matematika Kreatif-Inovatif, 1 (1), 33–44, 2010.

[5] J.
Stewart, "Calculus", Thomson Brooks/Cole, Belmont, 5th edition, 2007.

[6] Sumargiyani, "Penerapan pembelajaran kontekstual pada pembahasan volume benda putar dengan pembelajaran kontekstual", Prosiding Seminar Nasional Matematika dan Pendidikan Matematika, 2006.

[7] M. D. Weir and J. Hass, "Thomas' Calculus", Pearson, Boston, 12th edition, 2010.

International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 2, pages 169–178, p-ISSN 2655-8564, e-ISSN 2685-9432

The Lagrangian and Hamiltonian for RLC circuit: simple case

Albertus Hariwangsa Panuluh
Department of Physics Education, Faculty of Teacher Training and Education, Sanata Dharma University, Yogyakarta, Indonesia
Corresponding author: panuluh@usd.ac.id
(received 11-04-2020; revised 06-06-2020; accepted 12-06-2020)

Abstract

The Lagrangian and Hamiltonian for a series RLC circuit have been formulated. We use the analogy between classical mechanics and electrical quantities. The analogy is as follows: mass, position, spring constant, velocity, and damping constant correspond to inductance, charge, the reciprocal of capacitance, electric current, and resistance, respectively. We find the Lagrangian for the LC, RL, RC, and RLC circuits by using the analogy and finding the kinetic and potential energy. First, we formulate the Lagrangian of the system. Second, we construct the Hamiltonian of the system by using the Legendre transformation of the Lagrangian. The results indicate that the Hamiltonian is the total energy of the system, which means the equation of constraints is time independent. In addition, the Hamiltonians of the overdamped and critically damped oscillations are distinguished by a certain factor.
Keywords: RLC circuit, Lagrangian, Hamiltonian, Legendre transformation

1 Introduction

Lagrangian and Hamiltonian mechanics are topics of the analytical mechanics or classical mechanics course that is studied in the second or third year of an undergraduate physics program. Some advantages are obtained when using the Lagrangian and Hamiltonian formalisms to solve mechanical problems. The Lagrangian and Hamiltonian formalisms solve mechanical problems using the energy of the system, which is a scalar quantity, differently from Newtonian mechanics, which solves mechanical problems using force, which is a vector quantity [1]. The RLC circuit is studied as part of the electricity and magnetism course by physics students as well as physics education students. The problems discussed in physics textbooks concern the properties of the electric currents and the potential differences or voltages of the electronic devices in the circuit [2]. Moreover, the RLC circuit serves as an example in the differential equations subtopic of the mathematical physics course [3]. However, the RLC circuit is not limited to the topic of electricity or the solving of differential equations. RLC research is now heading toward the quantum level. This is due to the rapid development of nanotechnology, through which nanosized electronic devices are being developed. Theoretical research in nanoelectronic science has been conducted and has resulted in quantum transport, computational nanoelectronics, new device concepts, and novel transport physical phenomena found in small structures [4]. The quantum mechanics of the RLC circuit, which yields an approximation of the energy eigenvalues in terms of a dimensionless parameter, has also been studied [5]. One of the methods to quantize the RLC circuit is to use the Hamiltonian formalism.
For example, the Caldirola-Kanai Hamiltonian and the quantum invariant method have been used to solve the Schrödinger equation for the RLC circuit [6]. Further, there is also research on the quantum mechanical effects of underdamped, critically damped, and overdamped electric circuits with a power source, using a time-dependent Hamiltonian operator [7].

This research aims to formulate the Hamiltonian for the series RLC circuit without an emf source, which is the simple case. We formulate the Hamiltonian by the Legendre transformation of the Lagrangian from [8]. In addition, for educational purposes, this research enriches classical-mechanics learning materials based on the lecturer's research.

2 Research Methodology

We formulate the Hamiltonian theoretically based on the Lagrangian from [8]. First, we formulate the Lagrangian of the series RLC circuits by using the analogy between classical-mechanics quantities and electrical quantities [9]. These analogies are shown in Table 1.

Table 1. Analogy between mechanical quantities and electrical quantities

    Mechanical quantity        Electrical quantity
    mass (m)                   inductance (L)
    position (x)               charge (q)
    spring constant (k)        reciprocal of capacitance (1/C)
    velocity (v)               electric current (I)
    damping constant (c)       resistance (R)

Second, we construct the Hamiltonian of the system by the Legendre transformation of the Lagrangian. After that, we check whether the Hamiltonian is the total energy of the system or not. The limitation of this research is the series electric circuit without an emf (electric source); that is why this research is a simple case.

3 Results and Discussion

In this section, we present our research results and discussion of the Lagrangian and Hamiltonian approaches for the problem.

3.1 Lagrangian

Here we formulate the Lagrangian of the series electric circuits: the LC, RL, RC, and RLC circuits.
LC circuit

To formulate the Lagrangian, we first solve the differential equation of Kirchhoff's rule for the series LC circuit without an emf source,

L \ddot{q} + \frac{q}{C} = 0    (1)

which, for a discharging capacitor, has the solution

q(t) = q_0 \cos(\omega_0 t)    (2)

with q_0 the initial charge on the capacitor and

\omega_0 = \frac{1}{\sqrt{LC}}    (3)

To find the Lagrangian of the system, we first look for the kinetic and potential energies. Using the analogy in Table 1, the kinetic energy of the LC circuit (T_LC) is

T_{LC} = \tfrac{1}{2} L \dot{q}^2    (4)

and the potential energy of the LC circuit (V_LC) is

V_{LC} = \frac{q^2}{2C}    (5)

Therefore, the Lagrangian (we use \mathcal{L} for the Lagrangian to prevent confusion with the inductance symbol L) is

\mathcal{L}_{LC} = \tfrac{1}{2} L \dot{q}^2 - \frac{q^2}{2C}    (6)

RL circuit

We assume that the initial current flowing in the RL circuit is I_0. The differential equation of the series RL circuit is

L \frac{dI}{dt} + RI = 0    (7)

which has the solution

I(t) = I_0 e^{-Rt/L}    (8)

The current is the derivative of the charge with respect to time, which in mechanical terms is the analog of velocity (I = \dot{q} \leftrightarrow v = \dot{x}). So the kinetic energy of the RL circuit is

T_{RL} = \tfrac{1}{2} L I^2    (9)

and we assume that the RL circuit has no potential energy because there is no capacitance. Therefore, the Lagrangian of the RL circuit is

\mathcal{L}_{RL} = \tfrac{1}{2} L I^2    (10)

RC circuit

The differential equation of the series RC circuit, with q_0 the initial charge on the capacitor, is

R \dot{q} + \frac{q}{C} = 0    (11)

which has the solution

q(t) = q_0 e^{-t/RC}    (12)

The kinetic energy is T = \tfrac{1}{2} m v^2, and from Table 1 mass is the analog of inductance (m \leftrightarrow L). However, because there is no inductance in the RC circuit, the RC circuit has no kinetic energy. The potential energy of the RC circuit is

V_{RC} = \frac{q^2}{2C}    (13)

Therefore, the Lagrangian of the RC circuit is

\mathcal{L}_{RC} = -\frac{q^2}{2C}    (14)

RLC circuit

The differential equation of the series RLC circuit is

L \ddot{q} + R \dot{q} + \frac{q}{C} = 0    (15)

This equation is the analog of damped harmonic oscillation. We define

\gamma = \frac{R}{2L}    (16)

and

\omega_0 = \frac{1}{\sqrt{LC}}    (17)

where \gamma and \omega_0 are the damping factor and the natural frequency, respectively.
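The damping factor \gamma = R/(2L) and the natural frequency \omega_0 = 1/\sqrt{LC} determine which damping regime a given circuit falls into, by comparing the two. A minimal sketch of this classification, with illustrative component values that are not taken from the paper:

```python
import math

def classify_rlc(R, L, C):
    """Classify a series RLC circuit by comparing the damping factor
    gamma = R/(2L) with the natural frequency omega_0 = 1/sqrt(LC)."""
    gamma = R / (2.0 * L)
    omega0 = 1.0 / math.sqrt(L * C)
    if gamma > omega0:
        return "overdamped"
    if gamma == omega0:
        return "critically damped"
    return "underdamped"

# illustrative values: L = 1 mH, C = 1 uF -> omega_0 ~ 31623 rad/s;
# the critical resistance is R_c = 2*sqrt(L/C) ~ 63.25 ohm
print(classify_rlc(R=10.0, L=1e-3, C=1e-6))    # underdamped
print(classify_rlc(R=200.0, L=1e-3, C=1e-6))   # overdamped
```

The three regimes discussed below correspond exactly to the three branches of this comparison.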
So equation (15) can be written as

\ddot{q} + 2\gamma \dot{q} + \omega_0^2 q = 0    (18)

which has the general solution

q(t) = e^{-\gamma t}\left(A e^{\lambda t} + B e^{-\lambda t}\right)    (19)

where

\lambda = \sqrt{\gamma^2 - \omega_0^2}    (20)

There are three possibilities: overdamping, critical damping, and underdamping.

a) Overdamping. Overdamping occurs if \gamma^2 > \omega_0^2, so that \lambda is real. Equation (19) for the overdamping case, with A = B = q_0/2, is

q(t) = q_0 e^{-\gamma t} \cosh(\lambda t)    (21)

where q_0 is the initial charge on the capacitor. The kinetic energy is

T = \tfrac{1}{2} L q_0^2 e^{-2\gamma t} \left(\lambda \sinh \lambda t - \gamma \cosh \lambda t\right)^2    (22)

and the potential energy is

V = \frac{q_0^2}{2C} e^{-2\gamma t} \cosh^2 \lambda t    (23)

Therefore, the Lagrangian for the overdamping case is

\mathcal{L} = \tfrac{1}{2} q_0^2 e^{-2\gamma t} \left[ L \left(\lambda \sinh \lambda t - \gamma \cosh \lambda t\right)^2 - \frac{\cosh^2 \lambda t}{C} \right]    (24)

b) Critical damping. The condition for critical damping is \lambda = 0. Equation (19) for the critical damping case is

q(t) = q_0 e^{-\gamma t}    (25)

The kinetic energy is

T = \tfrac{1}{2} L \gamma^2 q_0^2 e^{-2\gamma t}    (26)

and the potential energy is

V = \frac{q_0^2}{2C} e^{-2\gamma t}    (27)

So, the Lagrangian for the critical damping case is

\mathcal{L} = \tfrac{1}{2} q_0^2 e^{-2\gamma t} \left( L \gamma^2 - \frac{1}{C} \right)    (28)

c) Underdamping. Underdamping occurs if \gamma^2 < \omega_0^2, so that \lambda^2 < 0. Equation (19) for the underdamping case is

q(t) = q_0 e^{-\gamma t} \cos(\omega_1 t + \phi)    (29)

where

\omega_1 = \sqrt{\omega_0^2 - \gamma^2} = \sqrt{\frac{1}{LC} - \frac{R^2}{4L^2}}    (30)

We choose \phi = 0, and then equation (29) becomes

q(t) = q_0 e^{-\gamma t} \cos \omega_1 t    (31)

The kinetic energy is

T = \tfrac{1}{2} L q_0^2 e^{-2\gamma t} \left( \gamma \cos \omega_1 t + \omega_1 \sin \omega_1 t \right)^2    (32)

and the potential energy is

V = \frac{q_0^2}{2C} e^{-2\gamma t} \cos^2 \omega_1 t    (33)

Therefore, the Lagrangian for the underdamping case is

\mathcal{L} = \tfrac{1}{2} q_0^2 e^{-2\gamma t} \left[ L \left( \gamma \cos \omega_1 t + \omega_1 \sin \omega_1 t \right)^2 - \frac{\cos^2 \omega_1 t}{C} \right]    (34)

3.2 Hamiltonian

In this subsection, we derive the Hamiltonian of each electric circuit. The Hamiltonian can be formulated by the Legendre transformation of the Lagrangian as follows [10]:

H = \sum_i p_i \dot{q}_i - \mathcal{L}(q_i, \dot{q}_i, t)    (35)

where p_i and q_i are the generalized momenta and generalized coordinates, respectively.
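The Legendre transformation can be illustrated numerically for the LC circuit: with \mathcal{L} = \tfrac{1}{2} L \dot{q}^2 - q^2/(2C), the conjugate momentum is p = \partial\mathcal{L}/\partial\dot{q} = L\dot{q}, and H = p\dot{q} - \mathcal{L} reduces to T + V, the total energy. A minimal sketch with illustrative circuit values (not from the paper):

```python
def hamiltonian_via_legendre(L_ind, C, q, qdot):
    """Legendre transform H = p*qdot - Lagrangian for the LC circuit."""
    lagrangian = 0.5 * L_ind * qdot**2 - q**2 / (2.0 * C)
    p = L_ind * qdot               # conjugate momentum p = dL/d(qdot)
    return p * qdot - lagrangian

# illustrative state of the circuit
L_ind, C = 2.0, 0.5      # inductance (H) and capacitance (F)
q, qdot = 0.3, 1.5       # charge (C) and current (A)

H = hamiltonian_via_legendre(L_ind, C, q, qdot)
total_energy = 0.5 * L_ind * qdot**2 + q**2 / (2.0 * C)
print(H, total_energy)   # equal up to rounding: H is T + V
```

This is the same mechanism by which equations (36)-(41) below come out as kinetic plus potential energy.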
After some calculation, we find that the Hamiltonians of the electric circuits are as follows:

H_{LC} = \tfrac{1}{2} L \dot{q}^2 + \frac{q^2}{2C}    (36)

H_{RL} = \tfrac{1}{2} L I^2    (37)

H_{RC} = \frac{q^2}{2C}    (38)

H_{over} = \tfrac{1}{2} q_0^2 e^{-2\gamma t} \left[ L \left(\lambda \sinh \lambda t - \gamma \cosh \lambda t\right)^2 + \frac{\cosh^2 \lambda t}{C} \right]    (39)

H_{crit} = \tfrac{1}{2} q_0^2 e^{-2\gamma t} \left[ L \gamma^2 + \frac{1}{C} \right]    (40)

H_{under} = \tfrac{1}{2} q_0^2 e^{-2\gamma t} \left[ L \left( \gamma \cos \omega_1 t + \omega_1 \sin \omega_1 t \right)^2 + \frac{\cos^2 \omega_1 t}{C} \right]    (41)

3.3 Discussion

This research aims to formulate the Hamiltonian for the series RLC circuit without an emf source, so it is the simple case. If an emf source is included, the equations become those of the driven case, with the emf source as the driving term [11]. From the results above, we find that the Hamiltonians of the LC, RL, RC, and RLC circuits (equations (36)-(41)) are the total energies of the systems, that is, kinetic energy plus potential energy. We also find that the Hamiltonian of the RL circuit is the same as its Lagrangian. The Hamiltonians of the overdamped and critically damped cases are distinguished by a certain factor. For further research, the RLC circuit can be quantized by using the Poisson bracket together with these Hamiltonians.

In addition, this simple case can easily be followed by students learning analytical mechanics. In Lagrangian and Hamiltonian mechanics courses, students usually deal with complex mechanics problems. With this research, students learn that Lagrangian and Hamiltonian mechanics are not only used to solve mechanical problems but can also be applied to electrical problems, so this research can serve as additional material in an analytical mechanics course.

4 Conclusion

This research aimed to formulate the Hamiltonian for the series RLC circuit without an emf source, the simple case. If an emf source is included, the equations become those of the driven case, with the emf source as the driving term [11].
From the results above, we find that the Hamiltonians of the LC, RL, RC, and RLC circuits (equations (36)-(41)) are the total energies of the systems, that is, kinetic energy plus potential energy. We also find that the Hamiltonian of the RL circuit is the same as its Lagrangian, and that the Hamiltonians of the overdamped and critically damped cases are distinguished by a certain factor. For further research, the RLC circuit can be quantized by using the Poisson bracket together with these Hamiltonians. In addition, this simple case can easily be followed by students learning analytical mechanics. In Lagrangian and Hamiltonian mechanics courses, students usually deal with complex mechanics problems. With this research, students learn that Lagrangian and Hamiltonian mechanics are not only used to solve mechanical problems but can also be applied to electrical problems, so this research can serve as additional material in an analytical mechanics course.

Acknowledgement

The author thanks LPPM USD for the research funds.

References

[1] G. R. Fowles and G. L. Cassiday, Analytical Mechanics, Thomson Learning, Belmont, 7th edition, 2005.
[2] R. A. Serway and J. W. Jewett, Physics for Scientists and Engineers with Modern Physics, Thomson Learning, Belmont, 7th edition, 2008.
[3] M. L. Boas, Mathematical Methods in the Physical Sciences, John Wiley & Sons, New Jersey, 3rd edition, 2006.
[4] F. A. Buot, "Mesoscopic physics and nanoelectronics: nanoscience and nanotechnology," Physics Reports, 234 (2-3), 73–174, 1993.
[5] C. A. U. Díaz, "Discrete-charge quantum circuits and electrical resistance," Physics Letters A, 372 (30), 5059–5063, 2008.
[6] I. A. Pedrosa and A. P. Pinheiro, "Quantum description of a mesoscopic RLC circuit," Progress of Theoretical Physics, 125 (6), 1133–1141, 2011.
[7] J. R.
Choi, "Quantization of underdamped, critically damped, and overdamped electric circuits with a power source," International Journal of Theoretical Physics, 41 (10), 1931–1939, 2002.
[8] A. H. Panuluh and A. Damanik, "Lagrangian for RLC circuits using analogy with the classical mechanics concepts," Journal of Physics: Conference Series, 909, 012005, 2017.
[9] H. Essen, "From least action in electrodynamics to magnetomechanical energy – a review," European Journal of Physics, 30 (3), 515–539, 2009.
[10] A. P. Arya, Introduction to Classical Mechanics, Prentice-Hall, New Jersey, 2nd edition, 1998.
[11] K. Ozdas and M. Kilickaya, "Novel method for the analysis of RLC circuits," International Journal of Electronics Theoretical and Experimental, 70 (2), 407–412, 1991.

International Journal of Applied Sciences and Smart Technologies
Volume 4, Issue 1, pages 75–88
p-ISSN 2655-8564, e-ISSN 2685-9432

This work is licensed under a Creative Commons Attribution 4.0 International License.

Characteristics of Wooden Furniture Drying Machine

Petrus Kanisius Purwadi(1,*), A. Prasetyadi(1)
(1) Department of Mechanical Engineering, Sanata Dharma University, Paingan, Maguwoharjo, Depok, Sleman, Yogyakarta 55282, Indonesia
* Corresponding author: pur@usd.ac.id
(Received 20-05-2022; revised 27-05-2022; accepted 28-05-2022)

Abstract

This study aims to determine the characteristics of an electrically powered drying machine used to dry wooden furniture made of mahogany. The research was conducted experimentally in the wood furniture industry. The size of the wood drying room is 8 m x 6 m x 3 m. The dryer works on a vapor compression cycle using R134a refrigerant, with several fans placed in the drying chamber, and the drying process uses a closed air system. The total volume of dried wooden furniture is 12.9 m³. The initial moisture content of the wood ranges from 18% to 23%; wooden furniture is considered dry when its moisture content is below 12%.
The research gives satisfactory results. The performance (COP) of the drying machine is 10.85, and the drying time for the wooden furniture is about 72 hours.

Keywords: COP, wooden furniture, dryer, vapor compression

1 Introduction

Drying wood furniture in the wood furniture industry with wood fuel still has many shortcomings. Apart from being energy-intensive, it is also impractical, unsafe, uncomfortable, environmentally unfriendly, and takes a long time. Economically, the drying process with wood fuel requires a large amount of money. The drying of wood furniture referred to here is a process that relies only on high-temperature air to dry all the wooden furniture. In general, the air temperature in the wood furniture drying room is not more than 70°C; above that, the air will damage the wooden furniture, and the wood may ignite. The air used in the drying process is heated by a heat exchanger that has a much higher working temperature: air is passed through the heat exchanger, and the resulting hot air is used to dry the wooden furniture. Through the heat exchanger flows a mixed gas (air and the gas resulting from the fuel combustion process) at a much higher temperature. Using wood fuel wastes energy, because the mixed gas from the combustion process that is discharged into the environment still has a high temperature, and, as is known, high-temperature exhaust gases carry high energy as well.

The drying process with wood fuel is also quite a hassle, because the industry must provide wood fuel every time it carries out the drying process, prepare and run the combustion, and maintain and oversee the drying of the wood furniture from start to finish.
Based on information from workers in the wood furniture industry, drying wood furniture with wood fuel takes more than 7 days. The process is considered unsafe: if it is not maintained and controlled properly, the possibility of a fire is quite large. It also does not provide comfort, because after the drying process is finished, the air in the drying room still has a high temperature (about 35°C). The industry could wait for the air to cool to nearly the outside condition, but that takes a long time, and in practice the industry cannot wait to move wooden furniture in and out while the air condition in the drying chamber is still quite hot. This makes workers uncomfortable when loading and removing wooden furniture from the drying room. The process is also not environmentally friendly, because burning wood fuel produces dust and exhaust gases that pollute the environment, as well as soot that blackens the walls of the drying room. The drying time is likewise relatively long; its duration depends on the volume of wood being dried, the thickness of the wood, the type of wood, the type of furniture, and so on.

Much research and many applications related to the drying process have been carried out by experts [1-14], although with different objects being dried and different drying systems. Research and application with fish as the object has been carried out by A. Aryadillah and A. Mursadin [1] and M. Nur Alam et al. [2]. Research with briquettes was carried out by P. K. Purwadi et al. [3]. Research with corn chips has been carried out by D. Purwadianto [4].
Research with towels has been carried out by K. Wijaya [5], and with clothes by P. K. Purwadi and W. Kusbandono [6,7,8,9], Balioglu [10], T. Mitsunori [11], and Bison [12]. Research and application with wooden planks has been carried out by W. Kusbandono and P. K. Purwadi [13] and by Purwadi, P. K., et al. [14]. The research methods used are partly simulation and partly experimental. Apart from aiming to obtain good performance or better efficiency of the dryer, some of this work also aims to obtain a dryer that can replace direct drying under the hot sun: such a dryer can work during the rainy season, at night, or whenever needed. Several of these studies use a drying machine that works on a vapor compression cycle; such machines, in addition to providing good performance, also provide a fast drying process.

The dryer made and used here for drying wood furniture works on a vapor compression cycle. With the vapor compression cycle, air that is dry and quite hot can be produced. Dry air means that the water content in the air is quite low, with a resulting relative humidity (RH) of less than 35%. The air produced is warm enough that the air condition in the drying chamber is no more than 35°C.

a. Vapor compression cycle

The main components of a vapor compression machine include the compressor, condenser, capillary tube, and evaporator. Figure 1 presents the circuit of the main components of a vapor compression machine, and Figure 2 presents the vapor compression cycle on the p-h diagram, accompanied by a further cooling (subcooling) process and a further heating (superheating) process; in the analysis of this study, superheating and subcooling are neglected.
The further cooling (subcooling) process is carried out so that the refrigerant entering the capillary tube is actually in the liquid phase, and the further heating (superheating) process ensures that the refrigerant entering the compressor is actually in the gas phase. Both processes are used to increase the performance of the refrigeration machine and to facilitate the compressor's function of circulating the refrigerant in the vapor compression system; the compressor's work becomes lighter, and the compressor is not easily damaged.

Figure 1. Vapor compression cycle component circuit
Figure 2. Vapor compression cycle on the p-h diagram

The amount of heat absorbed by the evaporator per unit mass of refrigerant (Q_in) can be calculated by equation (1):

Q_in = h1 − h4 (kJ/kg)    (1)

The amount of heat released by the condenser per unit mass of refrigerant (Q_out) can be calculated by equation (2):

Q_out = h2 − h3 (kJ/kg)    (2)

The compressor work per unit mass of refrigerant (W_in) can be calculated by equation (3):

W_in = h2 − h1 (kJ/kg)    (3)

The performance of the vapor compression cycle machine can be expressed by equation (4); the COP is the ratio between the amount of useful energy and the energy supplied to the dryer:

COP = (Q_in + Q_out) / W_in    (4)

The variables h1, h2, h3, and h4 in equations (1) to (3) are, respectively, the enthalpy of the refrigerant entering the compressor, leaving the compressor, leaving the condenser, and entering the evaporator. The enthalpy entering the evaporator is equal to the enthalpy leaving the condenser, i.e., entering the capillary tube.
This is because the process that occurs in the capillary tube takes place at a constant enthalpy value.

b. Wood furniture drying process

The medium used for drying wood furniture is air, in a closed air system: during the drying process, no outside air enters or leaves the building used for drying. The building consists of a wood furniture drying room and a drying machine room. The thermodynamic processes experienced by the air during drying are: cooling; cooling and dehumidifying; heating; and cooling and humidifying. When the air passes through the evaporator, it first undergoes a cooling process and then a cooling and dehumidifying process. After the air leaves the evaporator, it undergoes a heating process as it passes through the condenser. When the air is blown into the drying chamber by the condenser fan, it passes over the wooden furniture to be dried; in this stage the air undergoes a cooling and humidifying process. Several fans are installed inside the drying room; one of their functions is to let the cooling and humidifying process run optimally. Figure 3 presents a schematic of the wood furniture drying process carried out in this study.

Figure 3. Schematic of the wood furniture drying machine with a closed air system

2 Research Methodology

The research was conducted experimentally in the wood furniture industry, with a drying room measuring 8 m x 6 m x 3 m. The dryer is powered by electrical energy and works on a vapor compression cycle, using R134a refrigerant as the working fluid. The main components of the drying machine are the compressor, evaporator, condenser, and capillary tube.
The dried objects are wooden furniture; Figure 4 shows some examples.

Figure 4. Some examples of wood furniture being dried in the drying room

In the drying process, the evaporator serves to obtain dry air, while the condenser serves to obtain sufficiently hot air. When the dryer works, the air used to dry the wooden furniture passes through the evaporator and condenser before reaching the furniture. The total electric power required to drive the compressor is about 3 hp, and the total electric power for the evaporator fan and condenser fan is about 80 watts. In the drying room, 6 fans are provided, each with a power of 250 watts. In addition to accelerating the air flow, these fans ensure that the air flows over all the wooden furniture. The dried mahogany furniture has a total volume of 12.9 m³, in various forms. Before the drying process, the moisture content of the wood furniture was in the range of 18-23%; the drying process is stopped when all wooden furniture has a moisture content below 12%. The drying is carried out with a machine that works continuously without stopping.

3 Results and Discussion

The results of the research, in the form of data, are presented in Tables 1 to 4. Apart from the direct measurement results, data are also taken from the p-h diagram and the property tables of the refrigerant used (such as enthalpy, condenser working temperature, and evaporator working temperature). The data collection in this research was carried out with several assumptions:
a. the compression process in the compressor in the vapor compression cycle takes place isentropically;
b.
the refrigerant pressure drop in the capillary tube in the vapor compression cycle takes place isenthalpically;
c. the desuperheating and condensation of the refrigerant in the condenser take place at a constant pressure (P2);
d. the evaporation of the refrigerant in the evaporator takes place at a constant pressure (P1);
e. the superheating and subcooling processes are ignored.

From Table 3, the characteristics of the dryer used for drying wood furniture in the industry where this research was conducted can be seen. The amount of heat absorbed by the evaporator from the air, per unit mass of refrigerant, is 126.43 kJ/kg. The heat absorbed by the evaporator causes the dry-bulb air temperature to decrease and some of the water vapor in the air to condense; in other words, the air undergoes a cooling process followed by a cooling and dehumidifying process. The cooling and dehumidifying of the air takes place at a relative humidity (RH) of 100%. The air becomes dry because its water content decreases, i.e., the specific humidity of the air decreases. The amount of heat released by the condenser into the air, per unit mass of refrigerant, is 152.11 kJ/kg. This heat causes the air temperature to rise again and the relative humidity (RH) to decrease to 19%. With dry air at relatively low humidity, the drying of the wooden furniture can be carried out successfully. In this drying process, the compressor work per unit mass of refrigerant is 25.68 kJ/kg, and the resulting drying machine performance is 10.85.

Table 1.
Working pressures and temperatures of the vapor compression cycle machine

    Researched machine: vapor compression cycle machine of the wood furniture dryer
    Evaporator working pressure (P1)     : 414.61 kPa
    Condenser working pressure (P2)      : 1455.50 kPa
    Evaporator working temperature (Tevap): 10 °C
    Condenser working temperature (Tcond) : 54 °C

Table 2. Enthalpy values in the vapor compression cycle

    h1 = 404.32 kJ/kg
    h2 = 430.00 kJ/kg
    h3 = 277.89 kJ/kg
    h4 = 277.89 kJ/kg

Table 3. Characteristics of the vapor compression cycle machine of the dryer

    Q_in  = 126.43 kJ/kg
    Q_out = 152.11 kJ/kg
    W_in  = 25.68 kJ/kg
    COP   = (Q_in + Q_out)/W_in = 10.85

Table 4. Air conditions in the drying process of the wood furniture

    Air entering the evaporator   : Tdb,a = 30 °C, RH = 68%
    Air leaving the evaporator    : Tdb,c = Twb,c = 16 °C
    Air leaving the condenser     : Tdb,d = 44 °C, RH = 19%
    Drying time                   : 72 hours

The air conditions in the drying process are presented not only in Table 4 but also in Figure 5. The a-b-c-d-a loop in Figure 5 presents the air-drying process: the a-b and b-c processes occur when the air passes through the evaporator, the c-d process occurs when the air passes through the condenser, and the d-a process occurs when the air passes over the wooden furniture. The a-b process is air cooling, the b-c process is cooling and dehumidifying, the c-d process is heating, and the d-a process is cooling and humidifying. The cooling and humidifying process is assumed to run at a constant enthalpy value, so it is also known as evaporative cooling.
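The characteristics in Table 3 follow directly from the enthalpies in Table 2 via equations (1)-(4). A minimal check of that arithmetic:

```python
def dryer_performance(h1, h2, h3, h4):
    """Per-unit-mass energy quantities of the vapor compression cycle,
    following equations (1)-(4); enthalpies in kJ/kg."""
    q_in = h1 - h4                 # heat absorbed by the evaporator, eq. (1)
    q_out = h2 - h3                # heat released by the condenser, eq. (2)
    w_in = h2 - h1                 # compressor work, eq. (3)
    cop = (q_in + q_out) / w_in    # useful energy / supplied energy, eq. (4)
    return q_in, q_out, w_in, cop

# enthalpy values from Table 2 of this study
q_in, q_out, w_in, cop = dryer_performance(404.32, 430.00, 277.89, 277.89)
print(round(q_in, 2), round(q_out, 2), round(w_in, 2), round(cop, 2))
# 126.43 152.11 25.68 10.85 -- matching Table 3
```

Note that this COP counts both the evaporator and condenser heat as useful output, which is why it exceeds the COP of a cycle used for cooling alone.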
The air undergoes this evaporative cooling process while it carries out the drying of the wooden furniture; the evaporative cooling process takes place at a constant wet-bulb air temperature.

Figure 5. Air conditions in the drying process of the wood furniture

In the a-b process, the air undergoes a cooling process. The dry-bulb air temperature decreases until it reaches the dew-point temperature of the water vapor in the air (point b in Figure 5). The air releases heat, and its enthalpy decreases; the heat released by the air is used to vaporize (boil) the refrigerant flowing in the evaporator pipe. The a-b process runs at a fixed specific humidity, because no water is added to or removed from the air, while the relative humidity of the air increases as the dry-bulb temperature decreases. In this process, both the dry-bulb and wet-bulb air temperatures decrease until they reach the dew point (point b in Figure 5), where the dry-bulb temperature, the wet-bulb temperature, and the dew-point temperature of the water vapor in the air are all equal.

In the b-c process, the air undergoes cooling and dehumidifying. In addition to the decrease in dry-bulb temperature, the air also experiences a decrease in specific humidity. In this process the air releases heat, which causes its enthalpy to decrease; the heat released is used for the evaporation (boiling) of the refrigerant flowing in the evaporator pipe. As is known, the evaporator absorbs heat from the air passing over the evaporator pipe to change the refrigerant phase from liquid to gas.
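The dew point reached at point b can be estimated from the evaporator inlet state in Table 4 (30 °C, 68% RH) using the Magnus approximation for saturation vapor pressure; this formula and its constants are standard psychrometrics, not taken from the paper. The result lies well above the 16 °C evaporator outlet temperature, which is consistent with condensation occurring in the b-c process:

```python
import math

def saturation_pressure_hpa(t_celsius):
    # Magnus approximation for saturation vapor pressure over water (hPa)
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def dew_point(t_celsius, rh_percent):
    # actual vapor pressure, then invert the Magnus formula
    e = rh_percent / 100.0 * saturation_pressure_hpa(t_celsius)
    a = math.log(e / 6.112)
    return 243.12 * a / (17.62 - a)

td = dew_point(30.0, 68.0)   # evaporator inlet state from Table 4
print(round(td, 1))          # roughly 23-24 C, above the 16 C outlet
```

Because the evaporator surface is colder than this dew point, water vapor condenses out of the air, which is exactly the dehumidifying behavior described above.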
The specific humidity of the air decreases because in this process water is removed from the air: some of the water vapor condenses. The b-c process runs at 100% relative humidity, during which the dry-bulb air temperature equals the wet-bulb air temperature.

In the c-d process, the air undergoes a heating process. Both the dry-bulb and wet-bulb air temperatures increase, because the air receives the heat released by the condenser. As is known, in the vapor compression cycle the condenser releases heat to the air passing over it; this heat comes from the desuperheating and condensation of the refrigerant flowing in the condenser pipe. In this process the enthalpy of the air increases. The process runs at a constant specific humidity, because no water is added to the air, but the relative humidity decreases as the air temperature increases.

In the d-a process, the air undergoes an evaporative cooling process: cooling together with an increase in specific humidity. The dry-bulb air temperature decreases, but the wet-bulb air temperature remains the same. This is because some of the energy in the air is absorbed by the wooden furniture to evaporate part of the water it contains; as is known, the evaporation process requires latent heat. As the water in the wooden furniture evaporates, the water content of the wood decreases and, over time, the wood dries out. In this process, water is transferred from the wood to the air, so the water content per unit mass of air, i.e., the specific humidity of the air, increases.
Unlike drying wood furniture with wood fuel, drying it with a vapor compression cycle is not energy-intensive. The electrical energy used to drive the compressor and the fans is all useful for the drying process; relatively no energy is wasted. The energy absorbed by the evaporator dries the air, and the energy released by the condenser raises the air temperature; the resulting air condition benefits the drying of the wood furniture. The electrical energy required by the condenser fan and evaporator fan is also useful in increasing the air flow rate required for drying, and likewise the electrical energy used to drive the fans in the drying room. Even if some electrical energy leaks away as heat, as happens in the compressor or in the fans' electric motors, this heat energy is not wasted: the air used in the drying process also passes over the compressor and the fan motors, so the heat from these losses is taken up by the air and raises its temperature, and the higher the air temperature, the faster the drying process. As is known, the compressor and fans are placed inside the building. Because the system used in the drying process is not energy-intensive, it is natural that the drying machine has a high performance (the COP of the dryer is 10.85).

One of the characteristics of this wood furniture dryer with a vapor compression cycle is that it is practical: it does not burden the user. To run the dryer, the user just presses the on button, which starts the compressor and fans; to turn it off, the user just presses the off button.
Users are not burdened with maintaining and controlling the drying of the wooden furniture. During the drying process they can leave the area, since they are not required to stay at the drying location. Drying wood furniture with a vapor compression cycle also runs safely and comfortably. The chance of fire is small: the air condition in the drying chamber does not allow a fire to occur, as there is no flame and no embers. This is unlike drying with wood fuel, where, in the presence of fire, the temperature produced in the combustion process can exceed 200 °C. With a machine working on a vapor compression cycle, no air in the system exceeds 50 °C. The air temperature in the wood furniture drying room is safe, only around 30 °C, so workers can put wooden furniture into the drying room or take it out at any time. Workers do not feel hot, because the air temperature is not much different from the outside air, and the conditions are more comfortable than in the drying process with wood fuel. The air in the drying room is also relatively clean: no soot is produced by drying with this system. Drying with a vapor compression system is relatively environmentally friendly. It causes neither air pollution nor noise pollution, the refrigerant used in the cycle is chosen to be environmentally friendly, and it causes no social problems for the surrounding community. No exhaust gas is produced by the use of electrical energy; all of the electrical energy used in this drying process is converted into work and heat.
With the machine's high performance, the drying system is able to carry out the drying process relatively quickly. To bring the moisture content of all the wooden furniture below 12%, the time needed is only about 76 hours, roughly 3 days, provided the machine works continuously without stopping. This is feasible because the drying process can be left unattended and can run continuously, day and night. With a fast drying time, the production capacity for wood furniture can be increased.

4 Conclusion

Research on a wood furniture drying machine using the vapor compression cycle has been successful. The drying of wooden furniture runs as expected: it does not waste energy, and it is practical, safe, comfortable, environmentally friendly, and dries the wooden furniture quickly. The dryer has a high performance, with a COP of 10.85. The drying process for wooden furniture takes about 72 hours (3 days).

Acknowledgements

The author, Petrus Kanisius Purwadi, as grantee, expresses his gratitude for the support and grant provided by LPPM Sanata Dharma University Yogyakarta, which allowed this research to run well and be completed (research contract with LPPM USD no. 007/Penel./LPPM-USD/II/2022). The author also thanks the owner and staff of the wood furniture industry in Sragen where this research was conducted.

References

[1] A. Aryadillah, A. Mursadin, "Analisis perbandingan kinerja sistem distribusi panas pada variasi ruang mesin pengering ikan", SJME Kinematika, 1(1), 27-36, 2016.
[2] M. Nur Alam, Sukarti, F.I. Lisanty, "Penerapan teknologi alat pengering ikan bagi kelompok pengusaha ikan teri kering di Kecamatan Ponrang Kabupaten Luwu", Prosiding Seminar Nasional Hasil Pengabdian kepada Masyarakat (SNP2M), 225-229, 2018. (http://jurnal.poliupg.ac.id/index.php/snp2m/issue/view/50)
[3] P.K. Purwadi, Y.B. Lukiyanto, S.
Mungkasi, "Mengembangkan industri briket dengan mempergunakan mesin pengering briket energi listrik", Abdimas Altruis: Jurnal Pengabdian kepada Masyarakat, 1(2), 52-61, 2018.
[4] D. Purwadianto, P.K. Purwadi, "Karakteristik mesin pengering emping jagung energi listrik", Prosiding Seminar Nasional Universitas Respati Yogyakarta, 1(2), 116-123, 2019.
[5] K. Wijaya, P.K. Purwadi, "Mesin pengering handuk dengan energi listrik", Majalah Ilmiah Teknik Mesin Mekanika, 15(2), 31-35, 2016.
[6] P.K. Purwadi, "Mesin pengering kapasitas limapuluh baju sistem tertutup", Jurnal Ilmiah Widya Teknik, 16(2), 91-96, 2017.
[7] P.K. Purwadi, W. Kusbandono, "Mesin pengering pakaian energi listrik dengan mempergunakan siklus kompresi uap", Proceeding Seminar Nasional Tahunan Teknik Mesin XIV (SNTTM XIV), MT 61, 2015.
[8] P.K. Purwadi, W. Kusbandono, "Pengaruh kipas terhadap waktu dan laju pengeringan mesin pengering pakaian", Teknoin: Jurnal Teknologi Industri, 22(7), 514-523, 2016.
[9] P.K. Purwadi, W. Kusbandono, "Inovasi mesin pengering pakaian yang praktis, aman dan ramah lingkungan", Jurnal Ilmiah Widya Teknik, 15(2), 106-111, 2016.
[10] Balioglu, "Heat pump laundry dryer machine", Patent Application Publication, Pub. No. US 2013/0047456 A1, 2013.
[11] T. Mitsunori, "Dehumidifying and heating apparatus and clothes drying machine using the same", European Patent Specification, EP 2 468 948 B1, 27.11.2013, 2013.
[12] Bison, "Heat pump laundry dryer and a method for operating a heat pump laundry dryer", Patent Application Publication, Pub. No. US 2012/0210597 A1, 2012.
[13] W. Kusbandono, P.K.
Purwadi, "Effects of the existence of fan in the wood drying room and the performance of the electric energy wood dryer", International Journal of Applied Sciences and Smart Technologies, 3(1), 83-92, 2021.
[14] P.K. Purwadi, S. Mungkasi, Y.B. Lukiyanto, "Peningkatan pemahaman proses pengeringan kayu di SMK Pangudi Luhur Muntilan", Jurnal Abdimas Dewantara, 3(2), 16-29, 2020.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 1, pages 21-34, p-ISSN 2655-8564, e-ISSN 2685-9432

Impact of Online Education and Sentiment Analysis from Twitter Data Using Topic Modeling Algorithms

Sulochana Devi1,*, Chhaya Dhavale1, Lalita Moharkar2, Sushama Khanvilkar3
1 Department of Information Technology, Xavier Institute of Engineering, Mumbai, India
2 Department of EXTC, Xavier Institute of Engineering, Mumbai, India
3 Department of Computer Engineering, Xavier Institute of Engineering, Mumbai, India
*Corresponding author: sulochana.d@xavier.ac.in
(Received 01-05-2022; revised 16-05-2022; accepted 21-05-2022)

Abstract

During a pandemic all industries suffer greatly, and every sector of the world, including education, is affected in some way. Internet expressions reflect users' feelings about a product or service, and sentiment analysis determines the polarity of information in source data toward a subject under investigation. The goal of this study is to examine social media expressions about online teaching and learning, since online education will become part of everyday life in the future. We collected data from Twitter using keywords related to online education, and from engineering undergraduate students via a Google Form, for the prototype implementation. This analysis will help teachers, parents, and the student community understand the benefits and drawbacks for the education industry, allowing further improvement in educational outcomes.
We used aspect-based sentiment analysis and topic modeling to determine sentiment polarity and important topics for education sector stakeholders. To begin, we used the TextBlob Python package to determine sentiment polarity, and bag of words, LDA, and LSA models to discover topics. After modeling topics from the collected data, topic coherence is used to assess the degree of semantic similarity between the high-scoring words in each topic. Word clouds and LDAvis are used to visualize the data. The experimental results are promising and will help education stakeholders address the concerns identified in social media expressions.

This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Keywords: coherence, education, LDA, LSA, pandemic, topic modeling, Twitter

1 Introduction

The global spread of COVID-19 altered humans' daily routines of living, working, and operating, in addition to their social connections. Like every other sector, education faced grave implications for students, instructors, and institutions across the world. Academic institutes were closed to formal offline face-to-face education, leading to a virtual transformation and an unprecedented rise of online learning in the scenario of worldwide COVID-19 lockdowns. Online learning, also known as e-learning, is learning via internet-enabled devices such as mobile phones, desktop computers, laptops, and tablets [1]. Within days, education transitioned from face-to-face to online mode, and all stakeholders quickly adopted and continued with the new normal. This massive unplanned shift from traditional face-to-face learning to an online learning framework altered the traditional ways in which educational institutions deliver knowledge to their students. Because of the abrupt transition, both teachers and students faced numerous challenges. With video lectures and online exams, students were introduced to virtual textbooks and modules. Because of limited nonverbal communication, unstable internet access and technology, expensive equipment and devices, and a lack of technical knowledge, educators and learners faced many difficulties in digital learning. Despite the challenges, online education provided many benefits, such as continuity of studies, overall flexibility, increased information retention, extended reach of teaching, good grades, attendance, increased technical literacy, social interaction, accessibility, pace of learning, and no time or place restrictions.
Adoption of online learning will continue post-pandemic, and this shift will have an impact on the global education sector. A new hybrid educational model will emerge, with significant benefits. The integration of information technology in education will be accelerated, and online education will eventually become a required component of school education; traditional offline learning and e-learning can coexist. Given the possibility of a hybrid model in which online learning plays an important role, there is a significant need to understand social media users' and learners' attitudes toward online learning in order to make it more effective. This study examines students' perspectives on the positive and negative aspects of online learning, as well as the sentiments of Twitter users, who include students, teachers, and other stakeholders. During the pandemic, tweets from various education stakeholders such as teachers, students, parents, and other entities cover the majority of the aspects of online learning. Data collected from students via Google Forms include their perspectives on the abrupt shift from offline to online education, its impact on them, challenges encountered during the adaptation process, and their future expectations for online learning. Tweets of education sector stakeholders were first extracted with the snscrape scraper using various education-related keywords such as online education, e-learning, pandemic, and COVID-19. After preprocessing, sentiment analysis is done: since sentiments can be divided into positive, negative, and neutral forms, sentiment analysis identifies the polarity of information in the source materials toward an entity. We applied the LDA model to discover topics from social media users' tweets and students' opinions.
The LDAvis tool was used for data visualization. LSA is applied to analyze the text efficiently and find the hidden topics of concern related to education during the pandemic by understanding the context of the text. The bag of words technique is used to extract the top words of concern for social media users. The same LDA, LSA, and BoW techniques were applied to the data collected from students [24].

2 Research Methodology

Background work. Many researchers have used Twitter data for analysis, likely because data can easily be extracted through the API and Twitter is one of the most popular social media platforms. Collections of tweets contain useful information: by combing through them, anyone can see the most recent expressions or complaints about any popular entity. The results of such analysis can be used to improve educational performance in teaching and learning. Table 1 summarizes topic modeling algorithms and their applications as used by various researchers [1-3, 21, 23].

Table 1. Topic modeling and sentiment analysis review
[6] Dataset: 90,000 tweets from the study area. Approach: naive Bayes classifier. Aim: sentiment analysis of tweets on education during COVID-19. Remark: topic modeling not done, only sentiment analysis.
[7] Dataset: 1,717 tweets from Twitter. Approach: web analytics. Aim: find sentiment on educational posts. Remark: no ML.
[8] Dataset: online survey. Approach: percentages calculated from the frequency of common student responses. Aim: to know the effectiveness of online learning. Remark: challenges and obstacles in online education in Pakistan, from students' perspectives.
[9] Dataset: data collected with Google Forms. Approach: NLP techniques and a logistic regression classifier. Aim: sentiment analysis of education during the COVID-19 epidemic. Remark: no topic modeling.
[10] Dataset: short texts from Facebook. Approach: topic modeling algorithms LSA, LDA, NMF, and PCA. Aim: analysis of topic modeling techniques. Remark: all topic modeling methods reviewed.
[11] Dataset: Facebook and Twitter, in the insurance domain. Approach: topic modeling (LDA, LDAH) together with sentiment analysis. Aim: topic-aware sentiment analysis to improve communication with customers and give a better sense of the market. Remark: ML algorithms are used for text classification.
[12] Dataset: 13,967 tweets of Surabaya citizens. Approach: topic modeling with LDA and LSA. Aim: gather community requirements to support service policy. Remark: the government established the Surabaya Media Center.
[13] Dataset: 1,740 NIPS (Neural Information Processing Systems) papers. Approach: topic modeling with LDA and LSA. Aim: understand the various topics in the field of ML. Remark: topic modeling used for detecting semantic structures in a set of research papers.

3 Materials and Methods

This section presents visualization and description of the datasets used, a description of the sentiment analysis process, and the proposed methodology for performing topic modeling and sentiment analysis on the selected datasets.

Dataset description. Two different types of data are used in this study. The first dataset was collected from Twitter through the snscrape scraper using various keywords related to online education, such as 'onlineeducation', 'e-learning', 'pandemic', 'covid19', 'onlineclasses', and 'educationincovid', and contains 6,000 records.
The second dataset was collected from 150 students using a Google Form, to learn their perspective on online learning, its advantages and disadvantages, and the difficulties and challenges faced during online learning. Table 2 presents sample tweets from the dataset with username, date, and tweet content, and Table 3 presents sample opinions from the students' data.

Table 2. Tweet samples from the collected dataset.
Education Blog, 2020-10-17 14:43:06+00:00: "#education: as the #covid19 #pandemic lingers, the impact on #highereducation is becoming clearer: #college closures, academic program terminations and institutional mergers are occurring at a pace seldom, if ever, seen before."
Paul Wusow, 2020-10-16 20:23:04+00:00: "the covid-19 pandemic has changed education forever. this is how via @wef: https://t.co/owfvjyqzev #covid19 #pandemic #education"
Education Blog, 2020-10-15 15:16:23+00:00: "#education: the #covid19 #pandemic is offering us an opportunity to rethink #accountability in education. https://t.co/jhtq50jrbt #schools #students #children #teaching"
Panafrica NUK, 2020-10-07 15:01:54+00:00: "teachers have had to move from a space in which they have years of experience to the unknown and challenging world of online, remote, correspondence and socially distanced teaching. read more https://t.co/xhfuhrrxmh #africa #africancaribbean #covid19 #education #pandemic"

Table 3. Sample opinions from the students' dataset.
1. Positive: no travel, comfortable environment, flexible. Negative: no proper timeline, no proper instruments for practicals in certain subjects.
2. Positive: timings are flexible and we are doing it from the comfort of our homes, but the negatives are the absence of social interaction and lack of routine.
3. Because of online education we get an opportunity to learn many new courses online and avoid the expenses of traveling.

After collecting both datasets, preprocessing is done to clean the data and remove unessential information. Then the polarity score of each tweet is obtained using the TextBlob Python package, and tweets are categorized into three polarities: positive, negative, and neutral.

Methodology. This subsection explains the phases of the methodology followed and the approaches used in each phase. Aspect-based sentiment analysis and topic modeling techniques were applied to both datasets, as shown in Figure 1: education-related tweets and social media expressions about online education, together with students' perspectives on online education, feed into topic modeling and aspect-based sentiment analysis.

Figure 1. Proposed work.

The sequential workflow of the methodology applied to both datasets, with its steps and methods, is represented in Figure 2. The workflow starts with the input phase: data collection from students and tweet scraping from Twitter using snscrape. In the next phase the data is preprocessed to remove unnecessary and repeated words. After this, sentiment analysis and topic modeling techniques are applied to obtain the required output.

Figure 2. Processing flow.

Preprocessing of data. Before the data analysis process starts, the data is preprocessed to remove unrequired information; this helps to increase the efficiency of the models and gives better accuracy, so the first step is to preprocess the data before encoding [3]. Python's NLP toolkit is used for preprocessing the data in this study.
This is achieved by converting text into lower case; deleting URLs, hyperlinks, and HTML tags; applying stemming and lemmatization; and finally removing stop words.

Lowercase conversion: changing the text to lowercase reduces the complexity of the data. 'Data' and 'data' are treated as different tokens by machine learning models, so converting everything to lowercase maps both to 'data'. Treating lower- and upper-case words as dissimilar words would affect the training and classification process.

Elimination of URLs, punctuation marks, hyperlinks, HTML tags, and numbers: this kind of data provides no additional meaning for learning models, so it does not contribute to classification performance while escalating the intricacy of the feature set; deleting it reduces the feature space.

Lemmatizing and stemming: lemmatization and stemming are done to reduce inflectional forms, and sometimes different forms of related words, to a common base form [4]. For example, 'swims', 'swimming', and 'swam' are changed to the base word 'swim'.

Elimination of stop words: stop words provide no useful information for analysis; words such as 'yours', 'is', 'the', 'a', 'am', and 'an' are removed [5].

4 Results and Discussion

TextBlob. TextBlob is a Python library used for various NLP tasks such as sentiment analysis, part-of-speech tagging, paraphrasing, noun phrase extraction, and sorting [14].
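The cleaning steps above can be sketched in plain Python. This is a minimal illustration: the study used Python's NLP toolkit, while the tiny stop-word list and the naive suffix stemmer here are stand-ins for a real stop-word list and stemmer:

```python
import re

# Tiny illustrative stop-word set (a real NLP toolkit's list is much larger).
STOP_WORDS = {"is", "the", "a", "am", "an", "yours", "and", "to", "of"}

def naive_stem(word):
    """Very rough stand-in for a real stemmer: strip a few common suffixes."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    text = text.lower()                                  # lowercase conversion
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # drop URLs/hyperlinks
    text = re.sub(r"<[^>]+>", " ", text)                 # drop HTML tags
    text = re.sub(r"[^a-z\s]", " ", text)                # drop punctuation/numbers
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return [naive_stem(t) for t in tokens]

tokens = preprocess("The #pandemic is changing EDUCATION: https://t.co/xyz <b>online</b> classes!")
```

After these steps every surviving token is a lowercase, stemmed word with URLs, markup, punctuation, numbers, and stop words removed, which is the feature-space reduction the text describes.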
In our study it is used for sentiment analysis, providing a polarity score between -1 and 1 for each tweet. Tweets are assigned polarity based on this score: tweets with a polarity score less than zero are considered negative, tweets with a score equal to zero are considered neutral, and tweets with a score greater than zero are considered positive [15]. Table 4 presents the polarity percentages of both datasets.

Table 4. Polarity percentage
Dataset 1: positive 38%, negative 16%, neutral 46%.
Dataset 2: positive 63%, negative 22%, neutral 15%.

Feature selection. Bag of words (BoW) and TF-IDF are the most widely used methods for feature extraction.

Bag of words: BoW is a commonly used technique in NLP and information retrieval to extract features from preprocessed text or data [16]. BoW counts the appearances of each word in a text and forms a feature vector comprising the number of appearances of each unique word for text classification. BoW is generally used to create the vocabulary of all unique words and to train learning models through their frequencies. As output of BoW, the top words for dataset 1 are: capacity, coffee, covid, education, experiences; and for dataset 2: beginning, class, college, depressing, difficult, etc.

Term frequency-inverse document frequency: TF-IDF is used for feature extraction by extracting weighted features from text data. It gives the weight of each term in the corpus to enhance the performance of learning models [17]. The TF-IDF score for search keywords such as education, covid, and pandemic is 0.028766.

Topic modeling. In machine learning and natural language processing, topic modeling algorithms are used to scan large document collections and extract hidden patterns and phrases. The increased popularity of social media platforms makes them attractive to researchers for extracting ideas.
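The polarity bucketing, bag-of-words counting, and TF-IDF weighting described above can be sketched with the standard library alone. This is a minimal illustration: TextBlob itself produces the polarity score in [-1, 1], so only the labeling of a given score is shown, and the documents are made-up examples:

```python
import math
from collections import Counter

def polarity_label(score):
    """Bucket a TextBlob-style polarity score in [-1, 1] into three classes."""
    if score < 0:
        return "negative"
    if score == 0:
        return "neutral"
    return "positive"

def bow_top_words(token_lists, k=5):
    """Bag of words: count every token and return the k most frequent words."""
    counts = Counter(t for tokens in token_lists for t in tokens)
    return [word for word, _ in counts.most_common(k)]

def tfidf_weights(token_lists):
    """Per-document tf-idf: term frequency times log(N / document frequency)."""
    n_docs = len(token_lists)
    df = Counter()
    for tokens in token_lists:
        df.update(set(tokens))
    return [
        {t: (c / len(tokens)) * math.log(n_docs / df[t])
         for t, c in Counter(tokens).items()}
        for tokens in token_lists
    ]

docs = [["online", "education"], ["online", "covid"], ["education", "online"]]
labels = [polarity_label(s) for s in (-0.4, 0.0, 0.6)]
top = bow_top_words(docs, k=2)
weights = tfidf_weights(docs)
```

Note how a word that appears in every document ("online" here) gets a tf-idf weight of zero: it carries no discriminating information, which is exactly why tf-idf down-weights it relative to raw bag-of-words counts.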
Tweets contain unorganized short texts, so topics must be uncovered from tweet data through topic modeling. In this paper we have used the LDA (latent dirichlet allocation) and LSA (latent semantic analysis) methods. LDA is an unsupervised generative probabilistic model of a corpus that produces topics using word probabilities; it has two parts, the words within documents and the probability of words calculated in relation to a topic [23]. LDA is a well-known method for topic modeling, first introduced by Blei et al. in [22]. LSA is used to find relations between documents and expressions; its good performance in short-sentence classification has been demonstrated in various research works [20, 21]. Sample output obtained from LDA is given in Figures 3 and 4. Topic coherence is used to score a single topic by calculating the degree of semantic similarity between the high-scoring words in the topic. These calculations help to discriminate topics that are semantically interpretable from topics that are artifacts of statistical inference. There are different coherence measures, such as c_v, c_p, c_uci, and c_umass; we have used c_v and c_umass, as given in Table 5.

Table 5. Coherence score
Dataset 1: c_v 0.41852, c_umass -6.95928.
Dataset 2: c_v 0.33977, c_umass -2.48233.

Figure 3. Top 30 most salient words in topic 1, Twitter data.
Figure 4. Top 30 most salient words in Twitter data.

5 Conclusion

This study examines topics related to online education during the corona period. Polarity is calculated and analyzed as positive, negative, and neutral for the same dataset.
Two datasets are created: one from Twitter data and its analysis, and the other from data collected from students to understand the impact of online education on the student community. The LDA and LSA algorithms have been used successfully for topic modeling. LSA is a method for forming semantic generalizations from textual sections using singular value decomposition (SVD), whereas LDA is an unsupervised machine learning algorithm. Model-generated topics are not always easy to interpret, so topic coherence calculations are used to differentiate between good and bad topics. The resulting topics can then be used to overcome challenges in the education industry, as well as to consider online education as an opportunity for the needy. We discovered clear polarity and understanding in the student dataset and mixed expressions in the social media community.

Acknowledgements

The authors are highly grateful to Twitter for providing a platform for users to express their views, and thankful to Xavier Institute of Engineering, Mahim, Mumbai.

References

[1] Zhu, X., & Liu, J., Education in and after covid-19: immediate responses and long-term visions, Postdigital Science and Education, 2(3), 695-699, 2020.
[2] Mujahid, M., Lee, E., Rustam, F., Washington, P.B., Ullah, S., Reshi, A.A., & Ashraf, I., Sentiment analysis and topic modeling on tweets about online education during covid-19, Applied Sciences, 11(18), 8438, 2021.
[3] Reddy, A., Vasundhara, D., & Subhash, P., Sentiment research on twitter data, Int. J. Recent Technol. Eng., 8, 1068-1070, 2019.
[4] Jivani, A., A comparative study of stemming algorithms, Int. J. Comp. Tech. Appl., 2, 1930-1938, 2011.
[5] Armstrong, P., Bloom's taxonomy, Vanderbilt University Center for Teaching, 2019.
[6] Cheeti, S. S.,
Twitter based sentiment analysis of impact of covid-19 on education globally, International Journal of Artificial Intelligence and Applications (IJAIA), 12(3), 2021.
[7] Relucio, F.S., & Palaoag, T.D., Sentiment analysis on educational posts from social media, in Proceedings of the 9th International Conference on E-Education, E-Business, E-Management and E-Learning (IC4E '18), Association for Computing Machinery, New York, NY, USA, 99-102, 2018.
[8] Adnan, M., & Anwar, K., Online learning amid the covid-19 pandemic: students' perspectives, Journal of Pedagogical Sociology and Psychology, 2(1), 45-51, 2020.
[9] Lohar, S., The impact of covid-19 pandemic on education system, International Journal of Emerging Technologies and Innovative Research, 8(4), 428-430, April 2021.
[10] Albalawi, R., Yeap, T.H., & Benyoucef, M., Using topic modeling methods for short-text data: a comparative analysis, Frontiers in Artificial Intelligence, 3, 42, 2020.
[11] Albalawi, R., Yeap, T.H., & Benyoucef, M., Using topic modeling methods for short-text data: a comparative analysis, Frontiers in Artificial Intelligence, 3, 42, 2020.
[12] Qomariyah, S., Iriawan, N., & Fithriasari, K., Topic modeling twitter data using latent dirichlet allocation and latent semantic analysis, in AIP Conference Proceedings, 2194(1), 020093, AIP Publishing LLC, December 2019.
[13] Bellaouar, S., Bellaouar, M.M., & Ghada, I.E., Topic modeling: comparison of LSA and LDA on scientific publications, in 2021 4th International Conference on Data Storage and Data Engineering (DSDE), Association for Computing Machinery, New York, NY, USA, 59-64, 2021.
[14] Loria, S., TextBlob documentation, Release 0.15.2, 269, 2018.
[15] Sohangir, S., Petty, N., & Wang, D., Financial sentiment lexicon analysis, IEEE 12th International Conference on Semantic Computing (ICSC), 286-289, 2018.
[16] Eshan, S.C., & Hasan, M.S., An application of machine learning to detect abusive bengali text, 20th International Conference of Computer and Information Technology (ICCIT), 1-6, 2017.
[17] Zhang, W., Yoshida, T., & Tang, X., A comparative study of TF*IDF, LSI and multi-words for text classification, Expert Syst. Appl., 38, 2758-2765, 2011.
[18] Robertson, S., Understanding inverse document frequency: on theoretical arguments for IDF, Journal of Documentation, 60(5), 503-520, 2004.
[19] George, M., Soundarabai, P.B., & Krishnamurthi, K., Impact of topic modelling methods and text classification techniques in text mining: a survey, 2017.
[20] Salloum, S.A., Al-Emran, M., Monem, A.A., & Shaalan, K., Using text mining techniques for extracting information from research articles, in Shaalan, K., Hassanien, A., Tolba, F. (eds), Intelligent Natural Language Processing: Trends and Applications, Studies in Computational Intelligence, 740, Springer, Cham, 2018.
[21] Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., & Harshman, R., Indexing by latent semantic analysis, J. Am. Soc. Inf. Sci., 41, 391-407, 1990.
[22] Blei, D.M., Ng, A.Y., & Jordan, M.I., Latent dirichlet allocation, J. Mach. Learn. Res., 3, 993-1022, 2003.
[23] Jelodar, H., Wang, Y., Yuan, C., Feng, X., Jiang, X., Li, Y., & Zhao, L., Latent dirichlet allocation (LDA) and topic modeling: models, applications, a survey, Multimedia Tools Appl., 78, 15169-15211, 2019.
[24] Mujahid, M., Lee, E., Rustam, F., Washington, P.B., Ullah, S., Reshi, A.A., & Ashraf, I., Sentiment analysis and topic modeling on tweets about online education during covid-19, Applied Sciences, 11(18), 8438, 2021.
International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 2, pages 197-208, p-ISSN 2655-8564, e-ISSN 2685-9432

Information Technology and Learning Methodology amid the COVID-19 Pandemic

Hari Suparwito
Department of Informatics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
Corresponding author: shirsj@jesuits.net
(Received 05-10-2020; revised 09-11-2020; accepted 09-11-2020)

Abstract

The impact of the COVID-19 pandemic on the education sector caused schools and universities to close. Teaching and learning are instead delivered online through information and communication technology. Some issues have emerged, especially concerning the delivery of materials and the minimum requirements of online learning. This study aims to review learning methodologies and the role of information and communication technology for future learning. Heutagogy and computational thinking have been selected as the learning methodologies for approaching the digital native generation. There is no doubt that information and communication technology plays the significant role in undertaking online education. We therefore suggest some tools to enhance learning systems, such as gamification, virtual labs, and social media, and we discuss new learning media that use information and communication technology in education. The study's contribution is to describe technology's role in the future learning system, to be used by decision-makers in implementing e-learning better.

Keywords: COVID-19, computational thinking, e-learning, heutagogy, internet technology, learning media

1 Introduction

The COVID-19 pandemic is affecting human lives around the world.
just a week after the who declared the coronavirus disease 2019 (covid-19) outbreak a pandemic on march 12, 2020, the u.n. educational, scientific and cultural organization estimated that 107 countries had implemented national school closures, affecting 862 million children and young people who could no longer follow a regular education: going to school, learning in class, and playing in the yard [1]. governments have temporarily closed schools to contain the spread of the virus. this action intends to prevent the spread of the virus within institutions and to prevent transmission to vulnerable individuals. however, it has also had an impact on society and the education sector. school closures affect 80% of children worldwide, even though debates are ongoing about the effectiveness of school closures on virus transmission [2]. in response to the covid-19 virus, online instruction and teaching have been proposed, and everyone in the school community has needed to make adjustments in education. among the many changes imposed on us by the covid-19 pandemic is a shift in how educational content is delivered, with a migration away from the conventional in-classroom experience to more technology-based virtual learning experiences [3]. the study examines how technology during the covid-19 period has changed schools' conventional teaching and learning process. we discuss the role of information and communication technology (ict), especially as the supporting system for successful teaching and learning during the school closures of the covid-19 pandemic. the study's contributions are, first, to examine the possibilities of using learning methodologies related to ict to address the disruption caused by school closures during the covid-19 pandemic, and secondly, to show the potential factors that should be considered when implementing e-learning and information technology to support teaching and learning.
2 teaching and learning methodologies when schools are closed and teaching and learning are not delivered in the conventional way, a possible alternative is to teach and learn using ict. however, implementing ict in education is not primarily about the hardware or the kind of technology used; it is first about the teaching and learning methodology. in the last two decades, several teaching and learning methodologies related to ict have been used in education, such as heutagogy and computational thinking. these methodologies describe users' characteristics and the materials that should be delivered through ict. to understand them more deeply, we point out their essential features below. 2.1. heutagogy learning usually happens through experiences inside an educational setting (i.e., the classroom or the e-learning site); in heutagogy, however, the opposite holds: learners choose and undertake learning according to what they want [4]. of particular interest for distance learning, heutagogy is defined as the study of self-determined learning. distance education and heutagogy are both intended for mature adult learners [5]. in heutagogy, control of the educational process shifts from teachers to learners; heutagogy represents a change from teacher-centered to learner-centered learning [6]. heutagogy cannot be separated from andragogy. going back to the early seventies, knowles introduced andragogy, which acknowledges that life experience can increase motivation and relevance in learning environments. in other words, adults tend to be more self-directed in their learning, even though the curricula and the learning experience are still highly teacher-driven and directed [7].
andragogy was developed based on adult learners' characteristics: the need to know, self-concept, experience, and the readiness, orientation, and motivation to learn as a mature person. learning shifts from subject-centeredness to problem-centeredness. this assumption of andragogy has become a good platform for applying e-learning: in e-learning methods, the interaction between teachers and students occurs in virtual and independent meetings. concerning e-learning, heutagogy can be seen at two levels of thinking. the first relates to the acquisition of knowledge. the second is that the whole learning experience becomes much less predictable, and the learner's needs and motivation shift rapidly, not necessarily in concert with the teacher's aims or the curriculum. learning will probably bring more excitement and enjoyment, since it matches the learners' needs and how they want to achieve their goals [8]. 2.2. computational thinking another central aspect of the modern teaching concept is to deliver subjects using computational thinking. according to wing [9], computational thinking focuses on the process of abstraction, since the use of abstraction, automation, and analysis in problem-solving has become a crucial part of the modern era [10]. in general, computational thinking is the thought process involved in formulating a problem and expressing its solution so that a computer or human can carry it out effectively. computational thinking precedes any computing technology: it is thought of by a human, knowing full well the power of automation. focusing on the importance of computational thinking in the modern era of education, in 2016 the international society for technology in education (iste) released a learning standard for students.
the standard has seven parts that should be addressed in class teaching (see figure 1). the standard describes the learning achievements expected of students in the future. interestingly, iste included computational thinking as one factor in teaching the digital native generation. two of the seven aspects in the student standard refer to ict, namely computational thinker and digital citizen. figure 1. seven aspects of teaching to the digital native [11]. according to iste, a computational thinker means students can develop and integrate their ability to understand and solve problems using technological methods. students perform technology-assisted methods such as data analysis, abstract modeling, and algorithmic thinking to explore and find solutions. digital tools are used to analyze and identify relevant real-world data, break problems into parts, extract information, and develop descriptive models to understand and solve problems using computer programming or applications [11]. these aspects show that the abstraction thinking process with computer assistance can increase human capabilities to face daily life problems. 3 the future of learning we agree that internet technology can be a way to overcome the difficulties of the educational process in the pandemic era. using this kind of technology, teachers can deliver materials and reach students wherever they are. e-learning becomes the most effective learning technique; however, in practice, many aspects should be considered. this section discusses teaching methods related to technologies and what we should do when implementing them. we first introduce some terms in the education field that should be understood when discussing teaching methods in the modern, technological world. 3.1. e-learning
electronic learning (e-learning) is a type of education where the medium of instruction is computer technology. using e-learning, teachers and students can create, store, access, and interact with digital materials over the internet [12]. although e-learning is being used in teaching, it has yet to gain full acceptance. one reason to use e-learning is that the traditional classroom cannot meet the present-day world's needs, as modern technologies change the ways we learn [13]. it is still believed that e-learning technologies can enhance the effectiveness of teaching. when the technology is implemented, teachers must use various learning resources to support learners using ict. at least four components should be mentioned as resources in e-learning. a. learning management system a learning management system (lms) assists teachers and students in managing their courses on the e-learning system. ovarep (observatoire des ressources multimedias) defined it as follows: "the lms e-learning platform is a computing device that groups several tools and ensures the educational lines. across dedicated platforms to the odl (open and distance learning), all conduits are preserved and expanded for the learner, tutor, coordinator, and administrator within the e-learning platform" [14]. the largest lms offering various subjects from famous universities worldwide is the massive open online course (mooc) platform [15]. even though kloft et al. claimed in their study that some enrolled students drop out of their selected study, the study described mooc's significant role in the modern learning system [16]. moodle is another example of an open-source lms [17]. it provides all the sophisticated high-level functionality of an educational course management system. colleges and universities have chosen this kind of lms, such as moodle, to deliver courses online.
every subject can be made exciting and interactive with the help of such a course management system. we conclude that e-learning is anything involving computer technology to provide teaching and learning materials, where the interaction between teachers and students happens through ict. many schools have used ict or ilt (information learning technology) to make learning more enjoyable and exciting. we also recognize that integrating technology into the curriculum can cause stress and teacher burnout, as teachers put more and more effort into meeting the standards. nevertheless, we still believe that e-learning technologies can enhance the effectiveness of teaching. b. the internet the internet is being used as a significant teaching resource. broadband has enabled users to access information from hundreds of websites using search engines such as google, to create their own web pages in html (hypertext markup language), and to make their own blogs [18]. these websites provide learners with education channels. they enable students to practice their reading, writing, listening, and speaking skills in languages with interactive activities and games, and they provide intensive and extensive subject-specific lesson exercises. they help learners create self-tests, enabling them to monitor, record, and improve their progress [19]. c. computers or laptops computers and laptops are widespread tools in our lives. these devices are the primary tools in e-learning, through which teachers and students access the materials. with the introduction of wireless technology, learning can be done both inside and out of class. the teacher can add variety to teaching with audio-visual material and multimedia, thereby providing an inclusive, equitable, and motivating learning environment [20].
some software, such as microsoft office, allows learners and teachers to create quality documents. there is other specialist software, like photoshop, moviemaker, and macromedia dreamweaver, which is popular with learners as they experiment with it creatively in their specialist fields. computers and laptops have become the primary devices through which students and teachers interact and express their ideas. 3.2. new learning media a. gamified learning kapp defined gamification as "using game-based mechanics, aesthetics and game thinking to engage people, motivate action, promote learning, and solve problems" [21]. some game features can be mentioned: users; challenges or tasks to perform and progress towards defined objectives; and points awarded for executing tasks. gamified learning means taking the ideas of games and applying them outside the context of games to increase motivation and commitment in a learning atmosphere. e-learning, based on modern ict, creates favorable conditions for implementing gamification: the processing of students' data and the tracking of their progress are automated, and software tools can generate detailed reports. implementing game elements in education is logical, since some features are common to both games and training. there are many tools for gamification. some of them are web-based (cloud services), do not require particular software installation, and allow access anytime and from any location. among the most popular gamification tools are kahoot!, duolingo, classdojo, and goalbook. some of these tools are free plugins for wordpress that automatically create the different achievement types and pages needed to set up a badging system. b. social media recently, the use of social media in education has been increasing. most teachers use a heutagogy approach for teaching and learning [22]. students and digital natives spend much time on social media.
integrating social media apps into learning methodologies is among the most innovative approaches. it connects students to the curriculum, to learning resources, and to one another. teachers can create a facebook or whatsapp group specifically for the class to post discussion topics, or develop unique virtual-classroom twitter hashtags students can use to discuss lessons or ask questions. such group applications are powerful tools for delivering materials, since students and teachers access these apps frequently. c. digital content it is challenging, and an excellent way for students to display their creative talents, to convert their work into a digital format. students can express and provide their opinions through blogs, videos, podcasts, and other digital art when asked to answer real-world problems. respecting each student's individuality and need for creative expression helps them flourish as learners [23]. implementing digital content brings students' presentations to life by incorporating visual effects, photos, videos, and music. developing slideshows and digital presentations, playing music or a video for background and context while presenting, or inviting virtual guest speakers to engage with the class via conference-call programs such as zoom, skype, or google meet are all fun and creative ways to boost engagement with lessons while teaching the benefits of technology and multimedia use. d. virtual labs virtual reality technology is applied in education and teaching. it can create virtual training bases, virtual components, and equipment, generating new devices according to actual needs, and it offers specific immersion and interaction. in virtual reality, learners can perform role-playing and develop skills through more efficient exercise.
there is no risk in the virtual training system: learners can practice repeatedly and finally master skills. however, there is a difference between the virtual training system and the real environment, so during the use of the virtual system to practice skills, the process itself should be emphasized. based on virtual reality technology, virtual labs can also be realized. a virtual laboratory is a reproduction of a real laboratory and can also realize a laboratory's virtual concept. compared with the traditional laboratory, its advantages are pronounced. the first is that a virtual reality hardware device can be used to complete the experiment. the second is that it avoids the risks that exist in real experiments and makes it easy to upgrade the laboratory. virtual reality technology has played an auxiliary role in education, allowing teachers to liberate themselves from the original, relatively complicated teaching equipment. it can, however, create the false impression among some teachers that students can finish learning by using virtual reality technology alone, so that the teachers' role is weakened. 4 discussions there is no doubt that technologies, primarily ict, can play a significant role in replacing traditional in-class teaching amid the covid-19 pandemic. it is believed that using technology will produce advantages such as improving the quality of learning, reducing the cost of education, and making teacher-student interaction more significant. however, after nearly seven months of the covid-19 pandemic, several problems have emerged regarding online learning. to overcome the issues, some suggestions are listed below. a. infrastructure for third-world countries, infrastructure can be the first barrier to implementing e-learning. hardware and software for the learning system are expensive.
the learning materials should be delivered through the internet, and in some countries the price of internet bandwidth is also costly. government intervention is required in the form of new regulations on the use of internet bandwidth. furthermore, teachers at schools or universities should be creative in producing learning materials with various kinds of technology that use little internet bandwidth, such as avoiding streaming apps, videos, or movies. b. learning content creating learning content is difficult and takes time. students are not satisfied when teachers deliver learning materials only in a powerpoint or pdf file format. teachers need to be creative in content creation and to use interactive learning methods, finding interactive ways of learning such as games, crossword puzzles, dramas, and self-expression videos. c. resources humans are unique; we are different from each other. it should be remembered that not all teachers have the same ability in e-learning, and neither do the students. even though we are separated physically, we should try to interact with others personally. collaboration between teachers is the key: creating learning content need not be done alone. we can ask our colleagues for help if we are not accustomed to creating digital content such as videos, games, or animations. encourage students to work together with friends; teamwork is much appreciated. d. delivery system the delivery system is a crucial point in e-learning. one crucial aspect is deciding whether material is delivered synchronously or asynchronously. in some ways the synchronous system is better but more expensive in terms of e-learning technology. the students' conditions come first: in some areas it is easy to find an internet connection, while in others it is problematic.
some students have computers and laptops, but others do not. the most important point is that students get their learning materials quickly and cheaply. 5 conclusions the covid-19 pandemic has opened a new way of thinking about learning methodologies. technology in future learning can be seen as a new set of tools to enhance the delivery of materials and learning content. computers and other technologies (including virtual labs) will never replace teachers; however, teachers can incorporate technological tools to enhance their teaching quality and their students' learning. these new technological tools create a new way of thinking in teaching, and effort should be spent to develop, publish, and incorporate them in educational sectors. if we wish to improve teaching performance, we must reinforce the pedagogical facets of the new technological tools and develop programs that incorporate them as integral elements of teaching practice rather than as external technical aids to the teaching and learning process. care is needed to develop the appropriate pedagogy to support the new ict so that they can work together to deliver a new way of learning for the digital native student. acknowledgement the author thanks colleagues at the faculty of science and technology, sanata dharma university, yogyakarta, indonesia for their assistance and encouragement. references [1] r.m. viner, s.j. russell, h. croker, j. packer, j. ward, c. stansfield, et al. "school closure and management practices during coronavirus outbreaks including covid-19: a rapid systematic review." the lancet child & adolescent health, 2020. [2] w. van lancker and z. parolin. "covid-19, school closures, and child poverty: a social crisis in the making." the lancet public health, 5 (5), e243–e244, 2020. [3] f.m. reimers and a. schleicher.
"a framework to guide an education response to the covid-19 pandemic of 2020." oecd, 2020. [4] s. hase. "heutagogy and e-learning in the workplace: some challenges and opportunities." impact: journal of applied research in workplace e-learning, 1 (1), 43–52, 2009. [5] l.m. blaschke. "heutagogy and lifelong learning: a review of heutagogical practice and self-determined learning." the international review of research in open and distributed learning, 13 (1), 56–71, 2012. [6] s. hase and c. kenyon, self-determined learning: heutagogy in action. a&c black, 2013. [7] s. hase and c. kenyon. "heutagogy: a child of complexity theory." complicity: an international journal of complexity and education, 4 (1), 111–118, 2007. [8] c. englund, a.d. olofsson and l. price. "teaching with technology in higher education: understanding conceptual change and development in practice." higher education research & development, 36 (1), 73–87, 2017. [9] j.m. wing. "computational thinking." communications of the acm, 49 (3), 33–35, 2006. [10] j. cuny, l. snyder and j. wing, computational thinking: a definition (in press), 2010. [11] iste. [cited on 4 october 2020]; available from: https://iste.org/standards/for-students, 2020. [12] a.w. bates and t. bates, technology, e-learning and distance education. psychology press, 2005. [13] s. kuraishy and m.u. bokhari. "teaching effectively with e-learning." international journal of recent trends in engineering, 1 (2), 291, 2009. [14] d.s. walker, j.r. lindner, t.p. murphrey, k. dooley. "learning management system usage." quarterly review of distance education, 17 (2), 41–50, 2016. [15] mooc. [cited on 4 october 2020]; available from: https://www.mooc.org/, 2020. [16] m. kloft, f. stiehler, z. zheng, n. pinkwart.
"predicting mooc dropout over weeks using machine learning methods." proceedings of the emnlp 2014 workshop on analysis of large scale social interaction in moocs, 2014. [17] moodle. [cited on 4 october 2020]; available from: https://moodle.org/, 2020. [18] a. gunasekaran, r.d. mcneil and d. shaul. "e-learning: research and applications." industrial and commercial training, 2002. [19] d. smith and g. hardaker. "e-learning innovation through the implementation of an internet supported learning environment." journal of educational technology & society, 3 (3), 422–432, 2000. [20] i. han and w.s. shin. "the use of a mobile learning management system and academic achievement of online students." computers & education, 102, 79–89, 2016. [21] k.m. kapp, the gamification of learning and instruction: game-based methods and strategies for training and education. john wiley & sons, 2012. [22] l.m. blaschke. "using social media to engage and develop the online learner in self-determined learning." research in learning technology, 22, 21635, 2014. [23] m. weller and t. anderson. "digital resilience in higher education." european journal of open, distance and e-learning, 16 (1), 53, 2013. international journal of applied sciences and smart technologies volume 2, issue 1, pages 1–8 p-issn 2655-8564, e-issn 2685-9432 solving inverse problems interestingly rommel r.
real 1,2 1 mathematical sciences institute, college of science, the australian national university, canberra, australia 2 department of mathematics, physics, and computer science, university of the philippines mindanao, mintal, davao city, the philippines corresponding author: rommel.real@anu.edu.au (received 28-06-2019; revised 17-10-2019; accepted 17-10-2019) abstract inverse problems deal with recovering the causes of a desired or given effect. their presence across the sciences and their theoretical study can be traced to a classic example. moreover, their ill-posedness motivates computational methods to solve them, to which we give a very humble introduction. in relation to these, we discuss active research trends. keywords: divergence measure, ill-posed equation, inverse problem, regularization, noisy data 1 introduction inverse problems are everywhere. the simplest example out there would be human vision. from measurements of scattered light that reaches our retinas, our brain constructs a detailed 3-dimensional image of the world around us. but how is this possible with only a limited number of points in our surroundings? our brain has a way to complete the image by interpolating and extrapolating the data acquired from the identified points. constructing this image is actually an example of what is called an inverse problem. there is no universal definition of inverse problems. one might say that inverse problems are concerned with determining causes for a desired effect [4]. the american mathematician j.b. keller gave this well-cited definition [12]: "we call two problems inverses of one another if the formulation of each involves all or part of the solution of the other. often, for historical reasons, one of the two problems has been studied extensively for some time, while the other is newer and not so well understood".
this implies that for an inverse problem to be defined, first there must be an underlying direct problem. we will consider some examples. 2 an important example one of the most well-studied inverse problems is known as calderón's problem. back in 1980, a. p. calderón published a paper "on an inverse boundary value problem". it aims to solve this problem: is it possible to determine the electrical conductivity of a medium by making voltage and current measurements at the boundary of the medium? this inverse method is also known as electrical impedance tomography (eit). interestingly, calderón had been researching this problem while working as a petroleum engineer for argentina's state oil company back in 1947. he stayed there for a few years before embarking on a full-time career in mathematics. his paper became a ground-breaking one: it started a new era of mathematical research on inverse problems. later on, eit would find itself applied in medical imaging. in fact, a prize for outstanding contributions in the field of inverse problems is named after him (check out the calderón prize). you may check out gunther uhlmann's paper [16] for a summary of progress on calderón's problem. calderón's problem is a good example of how real-world problems motivate mathematical research. in geophysics, one needs to explore the earth's subsurface using reflected seismic waves. in ecology, space monitoring is used to assess trends of biological diversity using reconstructed satellite images. in medical imaging, tumors in a part of the human body must be accurately detected from the attenuation of x-rays. mathematically, these inverse problems are described using operator equations whose specific properties need to be studied with rigour.
in fact, the progress in the mathematical theory of inverse problems has been independent of the growing applications, though the gap between them remains wide. this gap can be filled by showing more applications where the theoretical results hold, promoting a deeper appreciation of inverse problems. 3 regularization an inverse problem can be expressed as an operator equation of the form $F(x) = y$ (1), for specific exact data $y$, where $F: X \to Y$ is a parameter-to-observation mapping between spaces $X$ and $Y$ (they can be hilbert, banach, or even general topological spaces) and $x$ is the parameter variable. inverse problems have one common property that makes them really hard to solve: they are ill-posed. in mathematics, a problem is well-posed if the following three conditions are satisfied: the problem has a solution, the solution is unique, and the solution depends continuously on the data. if one of the three is violated, the problem is ill-posed. for instance, the mathematical problem of eit consists of a highly nonlinear partial differential equation, which is very ill-posed. to address instability, we can solve the original ill-posed problem by solving a sequence of approximating well-posed problems. this technique is called regularization. applying some numerical scheme (e.g. one from optimisation) to an ill-posed problem may lead to different solutions; regularisation ensures a specific solution with desirable properties is recovered. for instance, suppose we have noisy data $y^\delta$. instead of solving the ill-posed equation (1), we solve an equivalent minimisation problem $\min_{x \in \mathcal{C}} d(F(x), y^\delta)$ (2), for an admissible set $\mathcal{C}$ and a divergence measure $d: Y \times Y \to [0, +\infty]$ between the reconstruction $F(x)$ and the noisy data $y^\delta$. since (1) is ill-posed, (2) is also ill-posed, but easier to handle. now we describe a very common regularization method.
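to make the ill-posedness concrete, here is a minimal sketch in plain python; the 2×2 matrix is a hypothetical stand-in for the operator $F$ in (1), chosen only for illustration. its rows are nearly parallel, so naively inverting the system amplifies a tiny data perturbation by several orders of magnitude.

```python
# hypothetical 2x2 ill-conditioned system standing in for F(x) = y in (1):
# the rows of F are nearly parallel, so the condition number is ~4e4.
def solve2(a, b, c, d, y1, y2):
    """solve [[a, b], [c, d]] x = (y1, y2) by cramer's rule."""
    det = a * d - b * c
    return ((y1 * d - y2 * b) / det, (a * y2 - c * y1) / det)

a, b, c, d = 1.0, 1.0, 1.0, 1.0001
x_true = (1.0, 1.0)                      # the "unknown" parameter
y_exact = (a * x_true[0] + b * x_true[1],
           c * x_true[0] + d * x_true[1])

# perturb the data by a measurement error of size 1e-3
y_noisy = (y_exact[0] + 1e-3, y_exact[1] - 1e-3)

x_naive = solve2(a, b, c, d, *y_noisy)
err = max(abs(x_naive[0] - x_true[0]), abs(x_naive[1] - x_true[1]))
print(err)  # about 20: the 1e-3 data error is blown up by four orders of magnitude
```

this is exactly the failure of the third well-posedness condition: the solution does not depend continuously on the data, which is what regularization is designed to repair.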
in variational regularization, we solve a surrogate of (2), given by $\min_{x} d(F(x), y^\delta) + \alpha R(x)$ (3), where $R: X \to [0, +\infty]$ is the penalty functional and $\alpha > 0$ is a constant regularization parameter. now, (3) is stable thanks to the penalty term, and can be solved using optimisation algorithms. an example of (3) common in statistics and machine learning is tikhonov regularization, or ridge regression, where normally the penalty term favors solutions with smaller norms, while the functional $d$ is the least-squares functional. the parameter $\alpha$ serves as a trade-off between data fidelity and stability, and must be fine-tuned. if the regularization parameter is chosen via a given parameter choice rule to solve (3), we can obtain the solution with the best possible trade-off. moreover, finding an appropriate regularization parameter to solve (3) is a very active area of research in inverse problems, and is usually done by imposing assumptions on the ill-posed problem. alternatively, iterative regularization can be used. to describe it, let us assume $X$ and $Y$ to be hilbert spaces with norm $\|\cdot\|$. the simplest method of this type is the landweber method, which is simply the gradient method applied to minimising $\|F(x) - y^\delta\|^2$. starting from an initial guess $x_0$, the (nonlinear) landweber iteration [5] is given by $x_{k+1}^\delta = x_k^\delta + \omega_k F'(x_k^\delta)^* (y^\delta - F(x_k^\delta))$ (4), where $\omega_k$ is a suitably chosen relaxation constant (it can be the same for all $k$). since the original problem (1) is ill-posed, (4) must be terminated after a finite number of iterations to ensure the best reconstruction possible; in this case the stopping index $k_*$ acts as the regularization parameter. as in variational regularization, choosing an appropriate $k_*$ using an appropriate parameter choice rule is also an active research area.
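as a hedged illustration of the variational problem (3) with a least-squares data term and a squared-norm penalty (i.e. tikhonov/ridge), the sketch below solves the well-posed normal equations $(F^\top F + \alpha I)x = F^\top y^\delta$ for a toy, nearly singular 2×2 system; the matrix, the noisy data, and the value of $\alpha$ are assumptions chosen for illustration only.

```python
# tikhonov (ridge) sketch: minimize ||F x - y||^2 + alpha * ||x||^2.
# the minimizer solves (F^T F + alpha*I) x = F^T y, which is well-posed
# for any alpha > 0 because the matrix becomes positive definite.
F = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular forward operator (assumed)
y_noisy = [2.001, 1.9991]         # noisy version of F applied to x_true = (1, 1)
alpha = 1e-3                      # regularization parameter (trade-off knob)

def tikhonov_2x2(F, y, alpha):
    # normal-equations matrix A = F^T F + alpha*I and right-hand side b = F^T y
    a11 = F[0][0] ** 2 + F[1][0] ** 2 + alpha
    a12 = F[0][0] * F[0][1] + F[1][0] * F[1][1]
    a22 = F[0][1] ** 2 + F[1][1] ** 2 + alpha
    b1 = F[0][0] * y[0] + F[1][0] * y[1]
    b2 = F[0][1] * y[0] + F[1][1] * y[1]
    det = a11 * a22 - a12 * a12   # strictly positive once alpha > 0
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

x_reg = tikhonov_2x2(F, y_noisy, alpha)
print(x_reg)  # close to the true (1, 1), unlike the unregularized inverse
```

making $\alpha$ larger pulls the solution further toward zero (more stability, less fidelity); making it smaller recovers the unstable naive inverse, which is why parameter choice rules matter.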
in some problems, variational regularization in the form (3) may lead to solving large and sparse systems of equations, which can be computationally expensive. in addition, for every value of $\alpha$ taken from an interval (e.g. an exponential grid), a full numerical scheme must be run. in contrast, iterative regularization carries a smaller computational burden per step, but may take a large number of iterations. for this reason, iterative methods are known to be slow but reliable, which often makes them preferable. however, variational regularization has one important advantage: it can be formulated using generalized divergence/residual terms $d(F(x), y^\delta)$, where $d$ in (3) does not have to be a norm on a banach space. instances arise in eit, where the measurement error of the data can depend on the location of the measurement (i.e. the error is larger in areas closer to the ultrasound receiver); this is also the case when measuring probability measures. for a convenient overview of these methods, the reader may refer to [9,13]. lately, theoretical results in this research area often go hand-in-hand with numerical examples, which come from many areas: optimal control, plasma physics, economics, etc. theoretical results usually involve convergence rates in terms of the noise level $\delta$, which allow comparisons between regularization methods; however, this is not doable in some settings, such as for iterative regularization methods in banach spaces [8,11,14]. the more important type of theoretical result is the proof of the regularization property: given a parameter choice rule, the approximate solution produced by the regularization method should converge to the (unknown) desired exact solution as the noise level decays to zero.
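the iterative alternative can be sketched directly from (4). for a linear operator, $F'(x)^* = A^\top$, so each landweber step is one matrix-vector correction; the operator and noise below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# landweber iteration (4) for a linear operator A:
#   x_{k+1} = x_k + w * A^T (y_delta - A x_k)
def landweber(A, y_delta, x0, w, n_iter):
    x = x0.copy()
    for _ in range(n_iter):
        x = x + w * A.T @ (y_delta - A @ x)
    return x

# illustrative mildly ill-conditioned linear problem (assumed for this sketch)
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10)) / 10
x_true = np.ones(10)
y_delta = A @ x_true + 1e-3 * rng.standard_normal(20)

# the relaxation constant is scaled so that w * ||A||^2 <= 1
w = 1.0 / np.linalg.norm(A, 2) ** 2
x200 = landweber(A, y_delta, np.zeros(10), w, 200)
print(np.linalg.norm(x200 - x_true))
```

each iteration costs only two matrix-vector products, which is the "lesser computational burden" per step noted above; the price is the possibly large iteration count.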
4 current trends

now i want to cite some important research trends in the field of regularization for inverse problems. one of the most well-studied types of parameter choice rules is the discrepancy principle, which says to pick the regularization parameter $\alpha$ (or the stopping index $k_*$) once the residual becomes smaller than a multiple of the noise level. mathematically, for iterative methods this means picking $k_*$ such that

$d\big(F(x_{k_*}^\delta), y^\delta\big) \le \tau \delta < d\big(F(x_k^\delta), y^\delta\big), \quad 0 \le k < k_*,$

for a given $\tau$. this rule has been well-studied for both variational and iterative regularization methods [4,5,14]. however, notice that aside from the noisy data $y^\delta$, the discrepancy principle requires knowledge of the noise level $\delta$. in some instances such information may not be available, and either overestimation or underestimation of the noise level may lead to an unsatisfactory solution of the inverse problem. hence, for regularization methods, it is necessary to develop heuristic rules which do not need the noise level. although bakushinskii's veto [1] states that heuristic rules cannot lead to convergence in the worst-case scenario for any regularization method, it is still possible to prove convergence results under such rules given specific assumptions on the noisy data. a well-studied heuristic rule was formulated by hanke and raus [6], and it has been developed further since then, for instance in [7]. another growing trend is the study of regularization methods in banach spaces. the standard reference book [4] compiles fundamental results in hilbert spaces. the need for extending these results to banach spaces arises naturally when the sought solution is non-smooth, e.g. sparse or piecewise constant; hilbert spaces are too smooth to cover such solutions. one strategy is to introduce a nonsmooth penalty term, such as the $\ell^1$-norm or the total variation (tv), in iterative regularization [2,8].
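the discrepancy principle can be attached to the landweber sketch as a stopping rule: iterate until the residual first drops below $\tau\delta$. everything below (operator, noise, $\tau = 1.1$) is an illustrative assumption.

```python
import numpy as np

# landweber with the discrepancy principle: stop at the first k with
#   ||A x_k - y_delta|| <= tau * delta,  for a given tau > 1
def landweber_discrepancy(A, y_delta, delta, tau=1.1, max_iter=100000):
    w = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return x, k           # k plays the role of the regularization parameter
        x = x + w * A.T @ (y_delta - A @ x)
    return x, max_iter

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10)) / 10
x_true = np.ones(10)
noise = 1e-2 * rng.standard_normal(30)
y_delta = A @ x_true + noise
delta = np.linalg.norm(noise)     # the rule needs the noise level

x_stop, k_stop = landweber_discrepancy(A, y_delta, delta)
print(k_stop, np.linalg.norm(x_stop - x_true))
```

note how `delta` enters the stopping test explicitly: this is exactly the information that heuristic rules, discussed next, try to do without.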
for variational regularization, one way to obtain convergence results is to impose more general conditions on the sought solution [7]. working in banach spaces also allows covering both linear and nonlinear problems, leading to more computationally challenging examples [15]. this is a great development, since many inverse problems are expressed in terms of highly nonlinear partial differential equations, leading to nonlinear and often non-differentiable forward operators. in this area, obtaining convergence results under more general assumptions and extending them towards general topological spaces [13] remains very active. another important issue involves the forward operator itself. for instance, problem (4) is formulated assuming that $F$ is differentiable in a suitable sense. it is possible to obtain a modified landweber method where, instead of the derivative of $F$, an operator approximating the derivative is used, leading to a derivative-free iterative method [10, chapter 4]. so far only a few types of derivative-approximating operators are known [3], and different types may lead to different convergence analyses. exploring more problems with nonsmooth forward operators is always a welcome research direction. in some problems where the direct problem involves not fully linear pdes, the corresponding forward operator may be too complicated to formulate explicitly, so methods for solving inverse problems without it would be necessary.

5 conclusion

the understanding of inverse problems is still young, with the modern theory dating back around thirty years. it is an area that requires broad expertise in computational mathematics, from solving pdes to optimisation. obtaining theoretical results involves a great deal of estimation in various settings, from banach spaces to sobolev spaces. it is a relatively new area that needs more people to move in!

references

[1] a. b.
bakushinskii, "remarks on choosing a regularization parameter using the quasioptimality and ratio criterion," ussr computational mathematics and mathematical physics, 24 (4), 181-182, 1984.
[2] r. bot and t. hein, "iterative regularization with a general penalty term: theory and applications to l1 and tv regularization," inverse problems, 28 (10), article id 104010, 1-19, 2012.
[3] c. clason and v. h. nhu, "bouligand-landweber iteration for a non-smooth ill-posed problem," numerische mathematik, 142 (4), 789-832, 2019.
[4] h. w. engl, m. hanke, and a. neubauer, regularization of inverse problems, kluwer academic publishers, london, 1996.
[5] m. hanke, a. neubauer, and o. scherzer, "a convergence analysis of the landweber iteration for nonlinear ill-posed problems," numerische mathematik, 72 (1), 21-37, 1995.
[6] m. hanke and t. raus, "a general heuristic for choosing the regularization parameter in ill-posed problems," society for industrial and applied mathematics journal on scientific computing, 17 (4), 956-972, 1996.
[7] q. jin, "hanke-raus heuristic rule for variational regularization in banach spaces," inverse problems, 32 (8), article id 085008, 1-18, 2016.
[8] q. jin, "landweber-kaczmarz method in banach spaces with inexact inner solvers," inverse problems, 32 (10), article id 104005, 1-26, 2016.
[9] b. kaltenbacher and a. klassen, "on convergence and convergence rates for ivanov and morozov regularization and application to some parameter identification problems in elliptic pdes," inverse problems, 34 (5), article id 055088, 1-24, 2018.
[10] b. kaltenbacher, a. neubauer, and o. scherzer, iterative regularization methods for nonlinear ill-posed problems, walter de gruyter gmbh & co. kg, berlin, 2008.
[11] b. kaltenbacher, f. schöpfer, and t.
schuster, "iterative methods for nonlinear ill-posed problems in banach spaces: convergence and applications to parameter identification problems," inverse problems, 25 (6), article id 065003, 1-19, 2009.
[12] j. b. keller, "inverse problems," the american mathematical monthly, 83 (2), 107-118, 1976.
[13] c. pöschl, tikhonov regularization with general residual term, dissertation, leopold-franzens-universität innsbruck, 2008.
[14] f. schöpfer, a. k. louis, and t. schuster, "nonlinear iterative methods for linear ill-posed problems in banach spaces," inverse problems, 22 (1), 311-329, 2006.
[15] t. schuster, b. kaltenbacher, b. hofmann, and k. s. kazimierski, regularization methods in banach spaces, walter de gruyter gmbh & co. kg, berlin, 2012.
[16] g. uhlmann, "30 years of calderón's problem", séminaire laurent schwartz - edp et applications, école polytechnique, 1-25, 2012-2013.

international journal of applied sciences and smart technologies volume 2, issue 1, pages 23–34, p-issn 2655-8564, e-issn 2685-9432

characteristics of solar still with heat exchangers on the cover glass

ignasius dinang yudra nanda 1,*, f. a. rusdi sambada 1, yulianto 1, angelina boru sitio 2
1 department of mechanical engineering, sanata dharma university, yogyakarta, indonesia
2 department of mathematics and natural sciences education, sanata dharma university, yogyakarta, indonesia
* corresponding author: dinangyudra@gmail.com
(received 14-07-2019; revised 19-01-2020; accepted 19-01-2020)

abstract

drinking water is one of the main needs of human life, but water sources are not always fit to drink. distillation is one way to obtain potable water, and it directly influences drinking water quality. low performance is the major problem in distillation, and one way to improve performance is by increasing the input water temperature.
one way to increase the input water temperature is by utilizing the heat energy on the cover glass. this study aims to improve the performance of solar-energy water distillation, and the input water temperature, by means of a heat exchanger. the research method is experimental. the measured variables are the absorber temperature in the distillation model, the glass temperature, the ambient temperature, the amount of distilled water produced, and the heat energy coming from solar energy. the research uses three variations: with a white heat exchanger, with a black heat exchanger, and without a heat exchanger. water quality was tested on 4 groups of parameters: microbiological, inorganic chemical, physical, and chemical. based on this study, the best results were produced by the black heat exchanger; the distilled water increased in quality even though it did not yet meet the standards of regulation no. 492 of 2010.

keywords: distillation, solar energy, glass cover, water quality, efficiency

1 introduction

water is one of the basic human needs, used mainly for drinking. not all regions in indonesia have water sources that are suitable for drinking, as in gunung kidul district, yogyakarta, which experiences difficulties with clean water [1]. water sources are often contaminated with soil, salt (seawater), or other materials, and water in these conditions can harm health if used directly. distillation is the evaporation of dirty (contaminated) water followed by condensation of the steam. steam from dirty water does not carry the substances that pollute it, so the water produced from this condensation is fit to drink [2].
the low performance of distillation is caused by the ineffectiveness of the evaporation and condensation processes [4]. one way to increase the effectiveness of the evaporation process is to raise the temperature of the water entering the distillation unit: warmer input water evaporates more easily. a common way to raise the input water temperature is to use a solar water-heater collector, but this requires a large investment in the collector's manufacture. in this study, a cheaper and easier method is used: the input water temperature is increased by utilizing the heat energy contained in the cover glass. this is done by adding a simple heat exchanger integrated into the cover glass, which takes heat energy from the glass and transfers it to the input water. as a result, the input water temperature increases and the glass temperature decreases. increasing the input water temperature increases the effectiveness of the evaporation process, and the reduction in glass temperature increases the effectiveness of condensation. this study analyzes the utilization of heat energy in a cover glass to improve the performance of solar-energy water distillation. the purposes of this study are to increase the temperature of the distillation input water with a heat exchanger, to improve the performance of solar water distillation by utilizing heat energy in the cover glass, and to analyze the quality of the distilled water as drinking water.

2 research methods

this research method is experimental.
the distillation model used in the indoor test includes 3 variations: a white heat exchanger, a black heat exchanger, and no heat exchanger. the variations used outdoors are without a heat exchanger and with a black heat exchanger. three input water flow rates are used in the indoor and outdoor research. data collection for each variation is done over 3 sunny days. data recording is done using sensors controlled by an arduino microcontroller. figure 1 shows the scheme of the apparatus used in this study. the main parts of this solar still are the input water reservoir, heat exchanger, cover glass, absorber, non-evaporated water tank, and clean water tank.

figure 1. parts of solar still

for the water analysis, 2 water sources are used: river water and well water. the water quality analysis, both before and after distillation, was carried out in a government laboratory (bbtklp). the data analysis used to assess the performance of the distillation models includes the efficiency

$\eta = q_e / G$, (1)

where $q_e$ is the solar energy used for the evaporation process and $G$ is the solar energy received by the distillation apparatus. the rate of evaporation is

$m_e = q_e / h_{fg}$, (2)

where $m_e$ is the rate of evaporation of the mass of water and $h_{fg}$ is the latent heat of evaporation of water. some heat energy is convected and radiated from the glass to the environment. the convected energy is calculated by

$q_{conv} = h_{ca} (T_g - T_a)$, (3)

where $q_{conv}$ is the convection to the environment, $h_{ca}$ is the convection coefficient from glass to ambient, $T_g$ is the glass temperature, and $T_a$ is the ambient air temperature. the energy radiated from the glass into the environment is calculated using

$q_{rad} = \varepsilon \sigma (T_g^4 - T_{sky}^4)$, (4)

where $q_{rad}$ is the radiation to the environment, $\sigma$ is the stefan-boltzmann constant, $T_{sky}$ is the sky temperature, and $\varepsilon$ is the emissivity.
the energy for evaporation is calculated by the equation

$q_e = h_e (P_w - P_g)$, (5)

where $h_e$ is the evaporative heat-transfer coefficient and $P_w$ and $P_g$ are the partial vapour pressures at the water and glass temperatures. the heat energy in the cover glass that can be utilized is calculated by the equation

$q_u = \dot{m} c_p (T_{out} - T_{in})$, (6)

where $\dot{m}$ is the input water flow rate, $c_p$ is the heat capacity of water, $T_{in}$ is the temperature of the input water entering the heat exchanger, and $T_{out}$ is the temperature of the water leaving the heat exchanger. the water analysis results are then evaluated by comparing them with the drinking water quality requirements of the minister of health regulation no. 492 of 2010 [3]. if the data meet the conditions specified by the regulation, as described in the theoretical basis, the distilled water qualifies as drinking water and can be consumed directly.

3 results and discussion

the indoor results can be seen in figures 2, 3 and 4, where variations without a heat exchanger are marked in brown, variations with a white heat exchanger in white, and variations with a black heat exchanger in black.

figure 2. graph of the average temperature difference between absorber and glass, indoor

figure 2 shows the average temperature difference between absorber and glass produced by the various water flow rate and heat exchanger variations. the highest temperature difference was obtained with the black heat exchanger variation; the lowest average temperature difference occurred at the 2 l/h flow rate.
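the heat-balance relations (3), (4) and (6) above can be evaluated directly; the sketch below uses assumed example inputs (temperatures, flow rate, coefficients), not measured data from this study.

```python
# illustrative evaluation of the heat-balance relations (3), (4) and (6);
# all numeric inputs below are assumed example values, not measured data.
SIGMA = 5.67e-8        # stefan-boltzmann constant, W/(m^2 K^4)

def q_convection(h_ca, t_glass, t_ambient):
    # eq. (3): convective loss from the glass to the environment, W/m^2
    return h_ca * (t_glass - t_ambient)

def q_radiation(eps, t_glass_k, t_sky_k):
    # eq. (4): radiative loss from the glass to the sky (temperatures in kelvin)
    return eps * SIGMA * (t_glass_k ** 4 - t_sky_k ** 4)

def q_heat_exchanger(m_dot, cp, t_out, t_in):
    # eq. (6): heat recovered from the cover glass by the input water, W
    return m_dot * cp * (t_out - t_in)

# example: 1.8 l/h of water warmed from 28 to 41 degC by the cover glass
m_dot = 1.8 / 3600            # kg/s (1 l of water is about 1 kg)
cp = 4186.0                   # J/(kg K)
print(q_heat_exchanger(m_dot, cp, 41.0, 28.0))   # recovered power in watts
```

with these assumed values the heat exchanger recovers a few tens of watts, which is the energy the glass would otherwise lose by convection and radiation.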
figure 3 is a graph of the volume produced by the various water flow rate and heat exchanger variations; the highest volume was produced by the black heat exchanger variation.

figure 3. graph of results of indoor distillation data

figure 4. indoor data collection efficiency graph

figure 4 is a graph of the efficiency produced by the various input water flow rate and heat exchanger variations; the variation using a black heat exchanger produced the highest efficiency.

the outdoor results can be seen in figures 5, 6, and 7. outdoor data collection was carried out for 7 hours.

figure 5. graph of outdoor distillation using black heat exchanger

figure 5 shows the yields of the black heat exchanger variation; the lowest results were obtained using the 1.3 l/h water flow rate.

figure 6. average graph of the amount of solar energy during outdoor data collection

figure 6 shows the average solar energy that occurred during each data collection run.

figure 7. graph of efficiency of the distillation tool, outdoor

figure 7 shows the efficiency produced during outdoor data collection.
table 1. standard quality of water analysis. columns: well water before and after distillation, river water before and after distillation, and the maximum permitted level, for the parameters: 1. arsenic, 2. fluoride, 3. total chromium, 4. cadmium, 5. nitrite, 6. nitrate, 7. cyanide, 8. selenium, 9. iron, 10. hardness, 11. chloride, 12. manganese, 13. ph, 14. zinc, 15. sulphate, 16. copper, 17. ammonia, 18. nickel, 19. sodium, 20. chlorine, 21. lead, 22. organic matter, 23. detergent, 24. aroma (none in all samples), 25. color (tcu), 26. tds, 27. turbidity (ntu), 28. taste (none in all samples), 29. temperature (°c), 30. e. coli (cfu), 31. total coliform (tntc).

the water quality test results can be seen in table 1: well and river water that have been distilled still do not meet the drinking water quality standards of the minister of health regulation no. 492 of 2010. after distillation, the well and river water showed both decreases and increases in physical, chemical, and microbiological levels. for well water, decreases after distillation occurred in nitrate, selenium, iron, hardness, chloride, sulfate, sodium, lead, organic matter, detergent, color, tds, coliform, and turbidity. in river water there were decreases in nitrite, nitrate, iron, hardness, chloride, sulfate, sodium, organic matter, chlorine, color, tds, coliform, ph, temperature, and turbidity. increases in distilled well water occurred in nitrite, ph, temperature, and zinc, while distilled river water showed increases in zinc, selenium, lead, and detergent.
basically, distillation reduces the physical, chemical, and biological levels of the contaminants in water. in this study, the increases in chemical content can be attributed to human error or contamination by the equipment used. even though there was an increase in the chemical content of the well and river water, most of the contents still fell within the drinking water quality standards, except the detergent in river water. the contents whose physical, chemical, and biological levels decreased moved into the drinking water quality standard.

4 conclusion

from the results of this study, some conclusions can be drawn:
1. the highest water volume was obtained using a black heat exchanger.
2. the highest efficiency was obtained using a black heat exchanger.
3. well and river water have better quality after distillation, even though they do not yet meet the quality standards of permenkes no. 492 of 2010.

acknowledgements

thanks to the ministry of research, technology, and higher education of the republic of indonesia for funding this research.

references

[1] metrotvnews, kabupaten gunung kidul krisis air bersih, jakarta: metrotvnews, 2017.
[2] h. effendi, telaah kualitas air bagi pengelolaan sumber daya dan lingkungan perairan, yogyakarta: kanisius, 2003.
[3] d. purwodianto and f. a. rusdi sambada, "unjuk kerja destilasi air energi surya menggunakan kondenser pasif," jurnal penelitian, 17 (1), 34-41, 2013.
[4] http://www.airminum.or.id/permenkes-4922010-persyaratan-kualitas-airminum.html (accessed on 22-10-2019).
international journal of applied sciences and smart technologies volume 2, issue 1, pages 75–88, p-issn 2655-8564, e-issn 2685-9432

smart campus mobile application toward the development of smart cities

tanweer alam 1,*, yazeed mohammed alharbi 1, firas adel abusallama 1, ahmad osama hakeem 1
1 department of computer science, faculty of computer and information systems, islamic university of madinah, saudi arabia
* corresponding author: tanweer03@iu.edu.sa
(received 23-01-2020; revised 09-02-2020; accepted 09-02-2020)

abstract

smart campus is an android mobile application with strong features to facilitate students, faculty, admins, parents, and managers. it provides a comprehensive integrated solution to improve the overall performance of the college. the use of mobile applications is increasing day by day, and we all use mobile applications for many things in our daily life. we came up with the idea of building an application that helps students and faculty members access the system as fast as possible. in this study, we discuss the problem and propose a solution to it: we cover the planning phase and its components, the functional and non-functional requirements with their data flow diagrams, and the design of the interfaces. this mobile application helps students and faculty members complete their tasks in the minimum time.

keywords: mobile application, android, smart campus, smart cities
1 introduction

the problem here is about time: to use the current system, you have to open the browser, search for the university, and then look for the page that leads into the system. with a dedicated application, many of these steps can be shortcut. after studying the problem and identifying the needs of students and faculty members [1], we decided to create a mobile application that serves those who need to perform tasks such as checking the schedule, adding and removing a course, and many other functions [18,19]. the goal is to develop a software platform that provides access to the digital college system as fast as possible:
a. collect requirements from students and faculty members.
b. analyze the collected requirements and propose solutions to the problems found.
c. attract the largest possible number of students and faculty members.
d. allow students and faculty members to use the system at any time, and make it easy to use.
what makes our application different from the existing islamic university application? after installing and testing that application, we found that it depends on the web pages of the islamic university and was created merely as an interface [4,5,6]: all icons are linked to web pages, and there is no database behind it. for example, if a student wants to view the schedule, clicking the icon in the academic system transfers him to the islamic university web page [7,8,9,10]. another problem is that the system supports only the arabic language, and the application was created only for students; it does not support faculty members and deans [11,12,13,14]. in contrast, our application has its own database: it is an independent application that does not depend on web pages, it also supports the english language, and it can be used by students, faculty members, and deans.
the rest of the paper is organized as follows: related works are explained in section 2, the research methodology in section 3, results and discussion in section 4, and the conclusion in section 5.

2 related works

this article builds on ideas from several existing applications, including:
1. imam muhammad bin saud university. the official application of the imam mohammed bin saud islamic university provides services for many members of the university: students, staff, faculty members, visitors, and others [15,16,17]. the application provides the most important news, announcements, and events about the university, in addition to university maps, the academic calendar, the telephone directory, and more [2]. the most important services for students are student information, schedule, grades, and exam schedule; the most important services for staff and faculty are salaries, vacations, allowances, and inboxes. this is shown in figure 1.

figure 1. imam muhammad bin saud university

2. qassim university application. the qassim university online service in figure 2 is a mobile application whose aim is to provide communication among students, faculties, management, etc. students can also add and remove courses from their schedule and view marks, transcripts, and study plans [3,19].

figure 2. qassim university application

3. islamic university mobile application. through the application in figure 3 you can follow:
a. university news and announcements
b. access to the university's personal account
c. booking travel administration dates
d. viewing the names of admitted students
e. the university calendar
f.
follow the requests of the university administration and communicate with the rector.
g. save your notes in the application (days and tests)
h. department of received messages

figure 3. islamic university

3 research methodology

the first diagram explains how a user (student, faculty member, manager, or admin) enters the system. when the user opens the application, he enters his user name and password; the application then checks with the database, and the database responds with a message [20,21]: if the user name and password match, the user enters the home page, otherwise the login is rejected. this is shown in figure 4 below.

figure 4. login to the system

figure 5 shows the sequence diagram of how the admin approves an account: the admin chooses an account in the application and approves it in the database, after which the approved account can enter the system.

figure 5. approve account

the next sequence diagram shows how a student can add or remove a course from his schedule [22,23]. to add a course, the student first chooses the course name in the application; the application adds the course to the database, the database responds with a message to the application, and the course appears in the student's schedule. removing a course is similar: the student chooses a course in the application, the course is deleted from the database, the database responds with a "course removed" message, and the course is removed from the student's schedule. this is shown in figure 6.

figure 6.
add/remove course

the following diagram explains how a student can evaluate courses and faculty members through questionnaires. after the student selects a specific course in the application, he starts the evaluation of the course and faculty member and submits the evaluation [24,25]. the application in figure 7 stores the submission in the database, and the database responds with an "evaluation stored" message.

figure 7. evaluate course and faculty member

the following sequence diagram shows how a faculty member sets marks for students. after choosing the student, the faculty member sets the marks in the application; the application stores the marks in the database [26,27,28], the database responds with a "marks stored" message, and the application shows the marks, as in figure 8.

figure 8. set marks to student

the next diagram shows how a manager can grant permissions. after selecting students or faculty members and determining the type of permission in the application, the application updates the database; the database responds that the student or faculty member now has the permission, and the students or faculty members can use it. this is shown in figure 9.

figure 9. give permission from the manager

the following sequence diagram in figure 10 explains how the admin updates data. first, the admin selects the user (student, faculty member, or manager) and changes the data in the application [29,30,31]; the application stores the data in the database, the database responds with a "data updated" message [32,33], and the application will show the updated data.
Figure 10. Update data from admin

4 Results and Discussions

The first interface is the login interface. It lets the user enter his name and password to enter the application. There is a "forgot password" button that takes the user to another page to create a new password. In addition, the "sign up" button takes the user to the page for creating new accounts. These are shown in Figures 11, 12, and 13 below.

Figure 11. Application interfaces

This is the registry interface, where you have the option of choosing between student, faculty, and administrator. To complete registration in the application, a username, password, and repeated password are necessary.

This is the main menu interface for a student. It offers student processes such as showing the profile, absences, schedule, and academic registration; the student can also evaluate courses.

This is the main menu interface for faculty members. It offers faculty processes such as showing the evaluation of a section and previous marks. In addition, faculty members can perform operations such as inserting attendance and marks.
Figure 12. Application interfaces

Figure 13. Application interfaces

5 Conclusions

The authors have completed all the initiation processes, starting from the problem statement and moving through the proposed solution to the objectives and aim. We studied and clarified three related works that have some similarity to our project: the Imam Mohammed bin Saud University application, the Qassim University application, and the Islamic University application. Each of them has one or more properties we are going to use in our application, and they also have some negatives we must avoid. A survey was conducted to gather information from users. We collected and analyzed the information from the survey and produced UML diagrams and data flow diagrams as well as the interface at the end. We have worked hard and diligently on this project, believe we have done well, and hope that everyone likes and uses our application and that it makes their lives easier; that is what we have tried to achieve.

References

[1] A comprehensive education management suite, http://www.auromeera.com/ (accessed on 10-02-2020).
[2] Al Imam University, https://play.google.com/store/apps/details?id=com.imamuniversity.app&hl=ar (accessed on 10-02-2020).
[3] Al Qaseem University, http://cutt.us/ukdaz (accessed on 10-02-2020).
[4] A. Tanweer, "5G-enabled tactile internet for smart cities: vision, recent developments, and challenges", Jurnal Informatika, 13 (2), 2019.
[5] V. R. Ganesh, "Android college management system", International Journal of Advanced Research in Computer Engineering & Technology, 5 (4), 2016.
[6] N. M. Z. Hashim and S. N. K. S. Mohamed, "Development of student information system", International Journal of Science and Research (IJSR), 2 (8), 2013.
[7] What is SDLC waterfall model?, http://cutt.us/thhif (accessed on 10-02-2020).
[8] M.
Rouse, Requirements analysis, 2007, http://cutt.us/3di7w (accessed on 10-02-2020).
[9] U. Eriksson, The difference between functional and non-functional requirements, 2015, http://cutt.us/5icj1 (accessed on 10-02-2020).
[10] Bisk, What is SWOT analysis?, http://cutt.us/xmfja (accessed on 10-02-2020).
[11] Use case diagram, http://cutt.us/sq4me (accessed on 10-02-2020).
[12] Islamic University, http://cutt.us/oqdrp (accessed on 10-02-2020).
[13] Just in Mind, http://cutt.us/1rnzi (accessed on 10-02-2020).
[14] Sequence diagram, http://cutt.us/s38iq (accessed on 10-02-2020).
[15] A. Tanweer and M. Benaida, "The role of cloud-MANET framework in the internet of things (IoT)", International Journal of Online Engineering (iJOE), 14 (12), 97-110, 2018.
[16] A. Tanweer, "Middleware implementation in cloud-MANET mobility model for internet of smart devices", International Journal of Computer Science and Network Security, 17 (5), 94, 2017.
[17] A. Tanweer and M. Benaida, "CICS: cloud-internet communication security framework for the internet of smart devices", International Journal of Interactive Mobile Technologies (iJIM), 12 (6), 74-84, 2018.
[18] A. Tanweer and B. Rababah, "Convergence of MANET in communication among smart devices in IoT", International Journal of Wireless and Microwave Technologies (IJWMT), 9 (2), 1, 2019.
[19] A. Tanweer, "IoT-Fog: a communication framework using blockchain in the internet of things", International Journal of Recent Technology and Engineering (IJRTE), 7 (6), 2019.
[20] A.
Tanweer, "Blockchain and its role in the internet of things (IoT)", International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 5 (1), 151-157, 2019.
[21] A. Tanweer, "A reliable framework for communication in internet of smart devices using IEEE 802.15.4", ARPN Journal of Engineering and Applied Sciences, 13 (10), 3378-3387, 2018.
[22] A. Tanweer, "A reliable communication framework and its use in internet of things (IoT)", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), 3 (5), 450-456, 2018.
[23] A. Tanweer and M. Aljohani, "Design and implementation of an ad hoc network among Android smart devices", Green Computing and Internet of Things (ICGCIoT), 1322-1327, 2015.
[24] A. Tanweer and M. Aljohani, "An approach to secure communication in mobile ad-hoc networks of Android devices", International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), 371-375, 2015.
[25] M. Aljohani and A. Tanweer, "An algorithm for accessing traffic database using wireless technologies", Computational Intelligence and Computing Research (ICCIC), IEEE International Conference, 1-4, 2015.
[26] A. Tanweer and M. Aljohani, "Design a new middleware for communication in ad hoc network of Android smart devices", Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies, 1-6, 2016.
[27] A. Tanweer, "Fuzzy control based mobility framework for evaluating mobility models in MANET of smart devices", ARPN Journal of Engineering and Applied Sciences, 12 (15), 4526-4538, 2017.
[28] A. Tanweer, A. P. Srivastava, S. Gupta and R. G.
Tiwari, "Scanning the node using modified column mobility model", Computer Vision and Information Technology: Advances and Applications, 455, 2010.
[29] A. Tanweer and B. K. Sharma, "A new optimistic mobility model for mobile ad hoc networks", International Journal of Computer Applications, 8 (3), 1-4, 2010.
[30] A. Tanweer, "Cloud computing and its role in the information technology", IAIC Transactions on Sustainable Digital Innovation (ITSDI), 1 (2), 2020.
[31] A. Tanweer, A. S. Salem, A. O. Alsharif, and A. M. Alhejaili, "Smart home automation towards the development of smart cities", APTIKOM Journal on Computer Science and Information Technologies, 5 (1), 2020.
[32] M. Aljohani and A. Tanweer, "Design an m-learning framework for smart learning in ad hoc network of Android devices", Proceedings of Computational Intelligence and Computing Research, IEEE International Conference, 2015.
[33] M. Aljohani and A. Tanweer, "Real time face detection in ad hoc network of Android smart devices", Advances in Computational Intelligence: Proceedings of International Conference on Computational Intelligence, 2015.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 1, pages 1-20, p-ISSN 2655-8564, e-ISSN 2685-9432

Cryptocurrencies Advantages and Disadvantages: A Review

Zaer Qaroush1, Shadi Zakarneh1, and Ammar Dawabsheh1
1M.Sc. student, Palestine Technical University Kadoorie, Tulkarem, Palestine
Corresponding author: zakarnehshadi@gmail.com
(received 07-04-2022; revised 08-05-2022; accepted 21-05-2022)

Abstract

With the rapid spread of technology in life and the pressing need to increase the speed, confidentiality, and privacy of payment processes, cryptocurrencies appeared.
A cryptocurrency is a virtual and intangible currency whose transactions are made through the internet. These currencies are characterized by decentralization, transparency, and privacy. Because transactions are carried out through a cryptographic process and depend on blockchain technology, they are highly protected. A blockchain is, in general, a distributed ledger or a decentralized database. The blockchain architecture combines advanced cryptography, consensus mechanisms, and a complex system of incentives. In a cryptocurrency, transactions are created, transferred, and verified through an integrated process called mining. The blockchain architecture has given cryptocurrencies many advantages and features that strengthen and distinguish them from regular financial transactions, such as decentralization, confidentiality, anonymity, very low fees, freedom from geographic restrictions, transparency, protection from inflation, and the peer-to-peer network. The misuse of powerful features in cryptocurrencies and blockchain technology has also led to many disadvantages, such as the risks of lack of knowledge, the lack of wide acceptance, the high risk of investment, their volatile nature, and the inability to recover missing payments. This study concentrates on the advantages and disadvantages of cryptocurrencies.

This work is licensed under a Creative Commons Attribution 4.0 International License, http://creativecommons.org/licenses/by/4.0/

Keywords: cryptocurrency, blockchain, mining, miners, wallet, bitcoin

1 Introduction

Since ancient times, mankind has used more than one method of payment in commercial transactions, such as barter, precious-metal coins, and then paper money, which is the most widespread to this day.
As a result of developments in the technology market, especially the internet, people keep looking for new payment methods suited to the needs and development of technology. The most recent method is cryptocurrencies, whose theoretical basis was laid by Chaum in 1983, relying on a fundamental principle of preventing misuse while accelerating production [1]. A cryptocurrency is a digital asset that uses cryptography to encrypt transactions and control the production of additional currency units, serving as a medium of exchange. Cryptocurrencies are categorized as virtual currencies and as alternative currencies [2]. Bitcoin is considered one of the most prominent cryptocurrencies; it was founded by a person or group using the pseudonym Satoshi Nakamoto [1][2]. Numerous other cryptocurrencies have been created since then. As Bitcoin derivatives, they are often referred to as "altcoins", and they are decentrally controlled. The decentralized control of the distributed-ledger function is based on the Bitcoin blockchain transaction database [2]. Besides having complete control over their own money, users can even send very small payments, like the so-called satoshi, which is equal to 0.00000001 BTC [3]. Bitcoin is "designed by people for people", and its rules are enforced through the participants' mutual distrust [3]. It has provided solutions to the double-spending problem and the Byzantine generals problem, and these innovations made the use of cryptocurrencies possible [4]. Before the invention of Bitcoin, it was impossible to transact electronically without a trusted third party; for example, PayPal was used as a trusted third party [4].
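As a quick check of the unit mentioned above (1 satoshi = 0.00000001 BTC), amounts are best handled as integer satoshis rather than floats to avoid rounding error. A minimal sketch, not tied to any real wallet API:

```python
from decimal import Decimal

SATOSHI_PER_BTC = 100_000_000   # 1 BTC = 10^8 satoshis

def btc_to_satoshi(btc: str) -> int:
    # Parse as Decimal so amounts like "0.1" are exact, then scale to integers.
    return int(Decimal(btc) * SATOSHI_PER_BTC)

def satoshi_to_btc(sat: int) -> Decimal:
    return Decimal(sat) / SATOSHI_PER_BTC

print(btc_to_satoshi("0.00000001"))  # 1 (one satoshi, the smallest unit)
print(satoshi_to_btc(150_000_000))   # 1.5
```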
To be sure that the same bitcoin has not been used or spent previously, the blockchain is used to examine new transactions. It is a peer-to-peer network that carries out the verification process by distributing the work to all users in the network, who use their computing power to reconcile and maintain the ledger in the blockchain [4]. Solving the double-spending problem raises another problem concerning newly added nodes and current nodes. As for new nodes: how can they be sure they have the correct ledger? As for current nodes: how do they know they are getting correct updates to the ledger? This problem is known as the Byzantine generals problem [4].

2 Literature Review

There are over a thousand cryptocurrency specifications as of October 2017; most are similar to, and derived from, the first fully implemented decentralized cryptocurrency, Bitcoin. The miners are mutually distrustful parties who maintain the protection, credibility, and balance of the ledger inside cryptocurrency systems: computers belonging to members of the public verify the timestamps of transactions, which are then added to the ledger according to a firm timestamp scheme. Most cryptocurrencies are designed to progressively reduce the currency supply, putting a final limit on the total amount of currency, which simulates precious metals in circulation. For law enforcement agencies, cryptocurrencies are more difficult to seize or hold as cash in hand than traditional currencies held by financial companies. The leverage of cryptographic technology causes this difficulty [2]. In 1998 Wei Dai published "b-money", an anonymous distributed electronic cash system. Shortly afterward, Nick Szabo invented "bit gold",
an electronic currency scheme, like other cryptocurrencies that would follow it, which required users to complete a proof-of-work function with solutions that were cryptographically put together and published. The work of Dai and Szabo was followed by Hal Finney, who later developed a reusable proof-of-work currency system [2]. Bitcoin was the first decentralized cryptocurrency, developed by Satoshi Nakamoto in 2009. As its proof-of-work scheme, it used SHA-256, a cryptographic hash function. In April 2011, Namecoin was created as an attempt at a decentralized domain name server (DNS), which would make internet censorship very difficult. Litecoin was launched in October 2011; it was the first popular cryptocurrency to use scrypt instead of SHA-256 as its hash function. The first to use a proof-of-work/proof-of-stake hybrid was Peercoin, another prominent cryptocurrency. IOTA was the first non-blockchain-based cryptocurrency, using the Tangle, a distributed ledger technology, instead. Several other cryptocurrencies have been developed, though few have been popular, because they brought very little technological innovation [2].

a. Reasons to use cryptocurrency

Cash payment is an easy, effective, and fast payment method, but it has many disadvantages, including exposure to fraud, loss of money, and the costs of managing transactions with financial institutions [5]. Many reasons encourage the trend toward cryptocurrencies. Among them are confidentiality and security: to maintain a high degree of security and confidentiality, cryptocurrencies use encryption methods based on public and private keys in transactions and payments, and cryptocurrency transactions are carried out easily and quickly.
Cost is another reason: payment and money-transfer operations are done at the lowest possible cost, as there are no banks or intermediary monetary institutions needed to transfer funds between the parties [5].

b. Blockchain

Simply put, a cryptocurrency consists of a public and distributed ledger (a distributed database) referred to as the blockchain, which users can use to record their transactions. Blockchain technology is designed to manage cryptocurrencies [6]. Blockchain is a decentralized solution for managing data and transactions. In this solution, the data is shared and recorded in multiple data stores called a distributed ledger, controlled by a network of distributed servers called nodes [7]. In a blockchain, data can only be added and cannot be deleted; the data takes the form of a series of transaction blocks. To protect data and transactions, an encryption method called cryptography is used, and specific mathematical algorithms are used to create data and verify the continuous growth of the data structure [7]. Depending on the practical application of the blockchain, it is divided into two main types:

1. Permissionless blockchain: there is no central authority to control users or to control who may join the network. A person only needs a computer with the necessary software installed, and with it he can join, perform the transactions he wants, and store them in the decentralized ledger; an identical copy of the ledger is distributed to all nodes in the network [7].

2. Permissioned blockchain: for a person to join the network, approval must be given by the network validators based on specific parameters and rules determined by the network administrator, who defines validators and rules.
The permissioned blockchain is divided into two subcategories: (a) the private permissioned blockchain, which restricts ledger updates and the creation and storage of transactions to an administrator, also known as an enterprise permissioned blockchain; and (b) the public permissioned blockchain, where the network can be accessed and viewed by anyone [7].

c. How the blockchain works

A blockchain can be likened to a distributed database. Any node of the network (a network member) adds to this database by creating a data block. The created data block is then broadcast in encrypted form to all network members, and the other network nodes determine the validity of the data based on a predefined validation algorithm; this method is called the consensus mechanism. After block validation is completed, the block is added to the blockchain, and the distributed transaction ledger is updated [8]. See Figure 1.

Figure 1. How blockchain works [8]

Each blockchain network member has two keys for his transactions: a public key and a private key. The public key is known to all network members; it is used as an address on the blockchain and to validate the sender's identity by verifying the digital signature. The private key is used to create the transaction's digital signature. These keys are kept in a digital wallet, online or offline [8].

d. Blockchain consensus mechanisms

Since the added data block is validated by a group of blockchain network nodes in a decentralized manner to ensure its legitimacy, there must be an agreement between the nodes on the method of validation. This is known as the consensus mechanism, a predefined encrypted method within certain parameters. It ensures the correct sequence of transactions in the blockchain.
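The flow described in section c (create a block, hash it, and have other nodes validate the chain before accepting updates) can be illustrated with a toy hash-linked ledger. This is a sketch for intuition only, not a real blockchain; the field names are illustrative and signatures and networking are omitted:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (excluding its own hash field).
    payload = json.dumps({k: block[k] for k in ("index", "prev_hash", "data")},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each block commits to its predecessor via prev_hash, forming the chain.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev_hash, "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)

def valid(chain):
    # A node re-runs this check before accepting updates to its copy.
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(valid(chain))                        # True
chain[0]["data"] = "alice pays bob 500"    # tamper with history
print(valid(chain))                        # False: hashes no longer match
```

Because every block's hash depends on its predecessor, altering old data invalidates all later links, which is why the text says data "can only be added and cannot be deleted".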
In cryptocurrencies, for example, these mechanisms prevent the double-spending problem [9]. A blockchain uses a particular consensus mechanism depending on the resources required and the expected results. The different types of consensus mechanisms used in blockchain technology are as follows:

1. Proof of work (PoW): known as mining; the network nodes are the miners. This process needs large-scale computing power to enable miners to solve complex mathematical puzzles. Many cryptocurrencies, like Bitcoin, use this mechanism [10].

2. Proof of stake (PoS): in this mechanism, the node with priority to produce the next block is determined through a random process. For a user to be able to produce blocks, he must become a validator, which can happen in several ways: after the user has locked his tokens for a certain period, or by being chosen based on the blockchain design. Keeping coins for the longest time, or owning the biggest stake, also gives a user a greater chance of becoming a validator. Proof of stake is considered more energy efficient than other mechanisms [10].

3. Delegated proof of stake (DPoS): delegates are chosen by a vote of the users, where a user can stake his coins and vote for a specific number of delegates; the user's share of coins determines his voting weight, so a larger stake means a larger vote weight. The delegates who get the most votes have the opportunity to produce new blocks. Delegate rewards are like those of other blockchain consensus mechanisms. This is one of the fastest blockchain consensus mechanisms [10].

4. Proof of capacity (PoC): in this mechanism, digital storage is used to store mathematical puzzle solutions.
Users who are faster at finding solutions get the chance to create a new block; this process is called plotting. Users with higher storage capacity have more chances to produce new blocks [11].

5. Proof of authority (PoA): a modified version of proof of stake in which the identities of the network's validators are at stake. Identity is used as a correspondence between the validators' personal identities and their official documents, in order to verify who they are. In this mechanism, the nodes that become validators are the only nodes authorized to produce new blocks. To secure and protect the blockchain network, validators whose identity is at stake are incentivized [11].

6. Proof of activity: a mixture of the proof-of-work and proof-of-stake mechanisms, relying on both miners and validators. The blocks that are generated are simple blocks containing only the mining reward address and header information. The header information is used to select a random group of validators for block signing, where the validators with the highest stakes are the ones selected to sign the new block. The miners try to solve the puzzle and claim their reward, and once the designated validators sign, the new block becomes part of the network. The block must be signed by all chosen validators or it is ignored. The network fees generated by the process are distributed between the winning miner and the validators [11].

e. Mining

Mining is an integrated process in which cryptocurrency transactions are created, transmitted, and verified. It ensures a stable, safe, and secure transfer of currency from payer to recipient.
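The proof-of-work puzzle from item 1 above boils down to searching for a nonce whose hash meets a difficulty target, while verification takes only a single hash. A toy sketch (the difficulty here is vastly below Bitcoin's real target, and the block format is invented for illustration):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Find a nonce so that SHA-256(data + nonce) starts with
    `difficulty` hex zeros: a toy version of the mining puzzle."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    # Verification is cheap (one hash), unlike mining (many hashes):
    # this asymmetry is what lets the whole network check a miner's work.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block 1: alice pays bob 5 BTC")
print(nonce, digest[:12])
print(verify("block 1: alice pays bob 5 BTC", nonce))  # True
```

Raising `difficulty` by one multiplies the expected search work by 16, which is how real networks keep block times stable as hardware improves.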
Cryptocurrencies are decentralized and work on a peer-to-peer system, unlike fiat currencies, which are controlled and regulated by a central authority. Banks need a huge infrastructure to generate currency and monitor transactions, but a cryptocurrency that uses mining overcomes this need, as it is the responsibility of miners, or nodes, to verify and monitor transactions [12]. When a transaction is made, its details are broadcast to all network nodes. To form a block, the transactions that take place during a specific period are summarized. The system is designed with transparency built in, whereby all transactions made are maintained and recorded in the blockchain [12]. Miners play a major role in mining, as they verify the ownership of currency from source to destination. Each transaction references a previous transaction performed by the owner; the current transaction is tested for validity and thus validated. Miners prevent double-spending of currency through this verification process [20]. The main objective of mining is to create and issue currency into the currency economy. After transactions are made and verified, it is the role of the miners to collect them and include them in the blocks they are currently solving. Before a block is broadcast, it must be solved and then placed in the blockchain. Solving a block involves difficult mathematical puzzles to crack and unlock. A miner is allowed to add a block to the ledger only after solving the math puzzle, and as a result, he is awarded a reward [13].

f. Mining requirements

Cryptocurrencies are mined using special machines called mining machines. Mining history runs from the CPU to the currently widely used ASICs.
New machines with better efficiency than previous designs have been developed as a result of the continual growth of mining difficulty [12].

1. During the beginning of mining, the CPU was used efficiently, with hash rates at or below roughly 10 MH/s, to mine cryptocurrencies. Previously, a computer with the necessary software installed was sufficient for mining, but because of the increasing difficulty, CPUs became insufficient as the required hash rates grew, and more advanced mining machines were needed. CPUMiner was one of the popular CPU mining programs [12].

2. Since CPU mining power did not meet the increasing requirements, the CPU was paired with graphics cards for coin mining. Graphics cards contain GPUs, which are designed to solve the complex polygons and heavy mathematical functions used in games. Different cryptocurrencies use different hash-based algorithms to solve blocks of transactions requiring heavy math; therefore GPUs are considered an alternative to CPU mining [12]. Expanding cryptocurrency hash rates through GPUs pushed the boundaries of consumer computing in new ways. Despite the benefits of GPUs, there are some limitations:

• GPUs are more expensive than normal CPUs [12].
• Each GPU must be connected to a PCI-E 16x or 8x slot, of which there are relatively few on commercial motherboards [12].
• Not using all components, such as RAM, motherboard, and hard disk, in GPU mining leads to an increase in the mining cost [12].
• GPUs require an additional 200-300 watts of power to mine effectively [12].
• Because a GPU takes two slots on a motherboard, it became difficult to connect more than one GPU to a computer for better performance [12].
3. FPGA (field-programmable gate array): it can be configured after manufacturing because it contains CLBs (configurable logic blocks), which can be reprogrammed, along with RAM and logic gates. Its power consumption is a fifth less than that of GPUs. It is also good at hash-based algorithms like the SHA-256 used in Bitcoin transactions [12].

4. ASIC (application-specific integrated circuit): Bitcoin ASICs, designed specifically for Bitcoin mining, are effective at the complex mathematical tasks of mining, with high speed and efficiency [12].

g. Wallets

An electronic wallet is a kind of electronic card used for transactions over the internet from computers or smartphones. Its utility is the same as a debit or credit card's. Cashless transaction technology has seen growth in the past year [15]. E-wallets are being used to help move away from the cash economy; because all transactions in the economy are accounted for in this process, the size of the parallel economy decreases. The mobile phone wallet became widespread in rural areas after spreading in urban areas; hence, wallet funds face a very bright future very soon. In this section, we study the types of electronic wallets and how to use them, with Bitcoin as an example [14]. To pay money, wallets hold the private keys needed to access their Bitcoin addresses. Wallets appear in various forms, especially for particular types of devices, and to avoid placing keys on a computer, paper storage can be used. It is important to have a backup copy of your Bitcoin wallet and to secure it [14]. Bitcoin has become a new form of cash and is starting to find acceptance among merchants as a payment method. The mechanism of transactions and the method of creating them are known; what remains is how they are stored.
Money is stored in a physical wallet; Bitcoin is likewise stored in a wallet, but a digital one [15]. To be precise, bitcoins are not stored anywhere; rather, what is stored are the protected digital keys used to access public Bitcoin addresses and sign transactions [15]. There are five main types of wallets [14]:

1. Desktop wallets

To run a wallet you need to install the original Bitcoin Core client. This program allows you to create a Bitcoin address for receiving and sending virtual currency, stores the private key for that address, and broadcasts transactions to the network. MultiBit runs on Linux, Mac OS X, and Windows. Hive is an OS X-based wallet with some distinctive features, one of which is an app store that connects directly to Bitcoin services. Some desktop wallets are tailored for security: Armory falls into this group. DarkWallet uses a lightweight browser add-on to deliver services including coin mixing, in which users' coins are exchanged for others' to prevent them from being tracked [14].

2. Online wallets

Web-based wallets store private keys on a computer connected to the internet, with access restricted to the user. These services are available anywhere through the internet and can connect to computer and mobile phone wallets, keeping the same addresses across the devices you use [15].

Advantages
• The time required to complete a transaction is short [16].
• Recommended for storing a small amount of cryptocurrency [16].
• Some digital wallets can store and transfer many different cryptocurrencies [16].
• The Tor network can be used for more privacy [16].

Disadvantages
• There is a third party that fully controls the digital wallet [16].
• When using a digital wallet it is advisable to use a personal computer with security software installed [16].
• Lack of knowledge of information technology exposes users to various kinds of online fraud [16].

3. Mobile wallets. Using a dedicated application on your smartphone, the wallet can store the private keys for your Bitcoin addresses, letting you pay directly from the phone. A Bitcoin wallet can take advantage of near-field communication (NFC), letting you tap the phone against a reader and pay with bitcoins without entering any information [15].

Advantages
• More convenient and easier to use than other types of cryptocurrency wallets [16].
• The Tor network can be used for more privacy [16].
• QR codes can be scanned for payments [16].

Disadvantages
• Mobile phones are not secure devices: a user's private keys may be lost if the phone is hacked [16].
• Mobile wallets are vulnerable to malware and viruses [16].

4. Hardware wallets. These take the form of a USB device running a wallet program; some include a screen, so the user does not need a computer to complete a transaction (see Figure 2). They give the user full control over the cryptocurrency, with the ability to store digital assets for a long time [15].

Advantages
• A USB wallet with a display screen is the most secure form [16].
• More secure than other wallets [16].

Disadvantages
• Difficult to buy [15].
• Risky for beginners, so not recommended for them [16].

Figure 2. Hardware wallet [17]

5. Paper wallets. One of the cheapest and most effective options for keeping your bitcoins safe (see Figure 3). Many websites offer paper Bitcoin wallet services.
They generate your Bitcoin address as an image containing two QR codes: one for the public address, used to receive bitcoins, and the other for the private key, used to spend the bitcoins stored at that address. In a paper wallet the private keys are not stored digitally on a computer or mobile device, so they are not exposed to electronic attacks or to the risk of hardware failure or loss [15].

Advantages
• It is kept in the user's wallet or pocket, and no computer connection is needed [16].

Disadvantages
• More time is needed to complete a transaction [16].

Figure 3. Paper wallet [18]

3 Results and Discussion

Based on previous studies and the literature review, blockchain is a decentralized database: each member of the network maintains a complete, synchronized, and verified copy of the database containing all transactions. The blockchain architecture combines advanced cryptography, distributed consensus mechanisms, and a complex system of incentives and rewards. This architecture gives blockchain a set of characteristics: transactions cannot be altered or forged; no trust in the integrity of the participants is required, yet the system ensures credibility; and it is resistant to censorship. If two participants want to conduct a transaction between them, it cannot be prevented, and because the settlement time is close to zero, fast final settlement and verification lead to faster movement of capital and increased liquidity.

A cryptocurrency is a virtual currency that exists only in electronic form on the Internet. Cryptocurrency was introduced as a digital currency for financial exchange independent of banks and financial institutions.
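The immutability described above rests on chaining each block to a hash of its predecessor. A minimal sketch in Python (a deliberately simplified block structure, not Bitcoin's actual block format) shows why altering any recorded transaction invalidates every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical (sorted-key) JSON encoding of the block contents.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block commits to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Altering any earlier block breaks every later prev_hash link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, ["alice pays bob 1"])
append_block(chain, ["bob pays carol 0.5"])
assert is_valid(chain)
chain[0]["transactions"] = ["alice pays mallory 100"]  # tamper with history
assert not is_valid(chain)
```

In the real network this tamper-evidence is combined with proof of work and replication across all nodes, so rewriting history would also require redoing the accumulated mining work on a majority of the network.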
Blockchain is the technology that underlies cryptocurrencies, conducting, securing, verifying, and storing their transactions, so these currencies gain many advantages from the characteristics of blockchain. Because the ledger database is distributed over the network, each member holds a complete copy of it. Miners are the members of the network who carry out the process of verifying transactions. There is therefore no central authority to control the network and the transactions on it, or to control the database individually; this is what makes cryptocurrencies decentralized. With the advantage of decentralization, transactions cannot be prevented or controlled. Once a user has a cryptocurrency wallet, he can perform any number of transactions and transfers at any time and anywhere, without restrictions; cryptocurrencies thus allow unlimited transactions.

Transactions in cryptocurrencies do not carry high fees. In a purchase, there are only very low fees for the crypto process and mining operations. Fees for mining do not go to a central authority but are distributed to the miners who verified the reliability and validity of the transaction, and these small fees are borne by one party to the transaction, the buyer. This contrasts with banks, which impose multiple types of fees for transactions, currency transfers, accounts, and database management, making cryptocurrency operations significantly lower in fees and costs than banks.
Also, as a result of the short duration of mining operations and the absence of a central authority, such as a bank, to control the transaction and grant approvals, transactions progress very quickly; short transaction times are a distinguishing feature of cryptocurrency. Since cryptocurrencies are virtual currencies whose transactions take place over the Internet and are not subject to the control of a central authority, transactions can be conducted between countries and across borders without hassle or restriction. This is what makes cryptocurrency a cross-border currency.

Blockchain stores cryptocurrency transactions in blocks recorded in the distributed ledger. Each user has a crypto address and can choose whether that address is public. If a user makes their crypto address public, other users can see how much crypto that user holds; if not, no one can know the amount. This adds a transparency advantage to cryptocurrencies. A user can also create a wallet, or any number of wallets, without providing a name, address, or any other real information, making anonymity another feature of cryptocurrency that maintains and protects privacy. Through cryptography, no one but the owner can make a payment from a wallet; cryptocurrencies use cryptography with public and private keys to achieve a high level of security.

In cryptocurrencies, transactions are carried out by a large number of distributed servers, possibly hundreds or more, as a cryptocurrency has no main server to control and manage transactions.
The wallet software installed on users' computers makes those computers part of the network. The exchange of transactions and payments between two or more members of the cryptocurrency network who have installed the wallet software happens directly, so neither banks nor governments can control the exchange of money. Cryptocurrency networks thus form a peer-to-peer network. Since cryptocurrencies do not answer to a central authority, nor are they controlled by companies or governments, and their issuance and mining are limited by design, no party or authority can change the system or create inflation in it.

The expanding use of this technology brings the world many advantages and disadvantages that depend on how it is used and for what purpose. The use of cryptocurrencies is likewise expanding, and the advantages of their use are accompanied by several disadvantages that depend on their characteristics, how they are used, and the goal of their use. Blockchain-based cryptocurrency technology is somewhat complex, and users need to learn it well before starting to invest; lack of knowledge exposes users to the risk of hackers. Cryptocurrencies are still not accepted by many countries, nor by many online buying and selling sites, which makes them impractical for everyday commerce; they are not widely accepted. Although cryptography and anonymity are two of cryptocurrency's strong advantages, they also make it possible to use it to finance illegal business and prohibited activities. In addition, with no central authority to issue and supervise it, there is no legal guarantee in the event of bankruptcy; investing in cryptocurrencies therefore carries high risk.
The volatile nature of cryptocurrencies has also increased the reluctance of people and companies to invest in them, as large swings in value increase the risk of investment. Furthermore, recovery of payments is not available: a mistaken payment cannot be recovered without the consent of the other party. This increases the risks of using cryptocurrencies and requires extra caution before conducting any transaction. As a result of the characteristics of cryptocurrencies and of blockchain, the technology on which they are based, cryptocurrencies have the advantages and disadvantages shown in Table 1.

Table 1. Cryptocurrency advantages and disadvantages

No. | Advantages | Disadvantages
1. | Decentralization | Lack of knowledge
2. | Unlimited number of transactions | Not widely accepted
3. | Low transaction fees | High risk in investing in cryptocurrency
4. | Fast transactions | Strongly volatile nature
5. | Cross-border currency | Missing payments cannot be recovered
6. | Transparency |
7. | Anonymity and privacy |
8. | High security |
9. | Peer-to-peer network |
10. | No inflation |

4 Conclusion

Cryptocurrency is a virtual currency that depends on cryptography to achieve confidentiality, secrecy, privacy, speed, and low transaction cost. Blockchain is the technology underlying cryptocurrency, and it provides cryptocurrency with everything needed to achieve decentralization. The blockchain architecture combines advanced cryptography, distributed consensus mechanisms, and a complex system of incentives and rewards.
Cryptocurrency aims to solve the problems of traditional currencies: loss of time and effort, fraud, loss of physical money, and high fees in banking transactions. Cryptocurrencies and their underlying technology, blockchain, have several advantages and features that distinguish them from regular currencies and financial transactions, such as decentralization, high confidentiality, anonymity, fast transactions, very low fees, an unlimited number of transactions, freedom from geographic borders, transparency, protection from inflation, and a peer-to-peer network. Alongside these advantages they have many disadvantages, such as the risks of lack of knowledge, the lack of wide acceptance, the high risk of investing in them, their volatile nature, and the inability to recover missing payments. These disadvantages can be mitigated by raising awareness and training in the use of these currencies on the one hand, and by creating regulations and laws that govern their operation and investment on the other, which reduces risk, helps adoption, and reduces misuse of their potential.

References

[1] M. Vejačka, "Basic aspects of cryptocurrencies," Journal of Economy, Business and Financing, 2(2), 75-83, 2014.
[2] A. Okhuese, "Introducing cryptocurrency," Schemas Group, 11-12, 2016.
[3] C. Rose, "The evolution of digital currencies: Bitcoin, a cryptocurrency causing a monetary revolution," International Business & Economics Research Journal, 14(4), 617-621, August 2015.
[4] E. D. A. J. Brito, "The New Palgrave Dictionary of Economics," New Palgrave Dict. Econ., March 2020.
[5] M. Badar, S. Shamsi and J. Ahmed, "Blockchain: concept and emergence," in Blockchain Applications for Secure IoT Frameworks: Technologies Shaping the Future, Bentham Science, 19, 2020.
[6] B. Scott, "How can cryptocurrency and blockchain technology play a role in building social and solidarity finance?," in Social and Solidarity Finance: Tensions, Opportunities and Transformative Potential, 2016.
[7] P. Mulgund, A. Sharma, A. Srivastava and L. Agrawal, "Beyond cryptocurrency: more to blockchain," Cutter Business Technology Journal, 32(11), 8, 2019.
[8] J. Wild, M. Arnold and P. Stafford, "Technology: banks seek the key to blockchain," Financial Times, November 2015. [Online]. Available: https://www.ft.com/content/eb1f8256-7b4b-11e5-a1fe-567b37f80b64?segid=0100320#axzz3qk4rcvqp. [Accessed: 04 December 2020].
[9] R. Houben and A. Snyers, Cryptocurrencies and Blockchain, European Parliament, 103, 2018.
[10] A. Nick and L. Hoenig, "Consensus mechanisms in blockchain technology," Lexology. [Online]. Available: https://www.lexology.com/library/detail.aspx?g=e30e7d54-3c7f-4ca0-8a22-478227a9b5ec. [Accessed: 02 December 2020].
[11] N. Joshi, "8 blockchain consensus mechanisms you should know about," Allerin, 23 April 2019. [Online]. Available: https://www.allerin.com/blog/8-blockchain-consensus-mechanisms-you-should-know-about. [Accessed: 04 December 2020].
[12] H. Krishnan, S. Saketh and V. Vaibhav, "Cryptocurrency mining – transition to cloud," International Journal of Advanced Computer Science and Applications (IJACSA), 6(9), 115-124, 2015.
[13] S. Nakamoto, "Bitcoin: a peer-to-peer electronic cash system."
[14] B. Pachpande and A. Kamble, "Study of e-wallet awareness and its usage in Mumbai," Journal of Commerce & Management Thought, 9(1), 33-45, 2018.
[15] P. Ankalkoti and S. S G, "A relative study on Bitcoin mining," Imperial Journal of Interdisciplinary Research (IJIR), 3(5), 1757-1761, 2017.
[16] S. Jokić, A. Cvetković, S. Adamović, N. Ristić and P.
Spalević, "Comparative analysis of cryptocurrency wallets vs traditional wallets," Ekonomika, 65(3), 65-75, September 2017.
[17] S. Singh, April 2018. [Online]. Available: https://cryptocurrencynews.com/best-hardware-wallets/.
[18] 30 May 2018. [Online]. Available: https://www.universidadedobitcoin.com.br/o-lancamento-da-paper-wallet-da-cardano-vem-com-recurso-de-armazenamento-offline.
[19] A. Rosic, "Blockchain consensus: a simple explanation anyone can understand," Blockgeeks. [Online]. Available: https://blockgeeks.com/guides/blockchain-consensus/#what_is_the_byzantine_generals_problem. [Accessed: 01 December 2020].
[20] J. Brito and A. Castillo, Bitcoin: A Primer for Policymakers, 2013.

International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 2, pages 195-220, p-ISSN 2655-8564, e-ISSN 2685-9432

This work is licensed under a Creative Commons Attribution 4.0 International License

Response Surface Modelling of the Mechanical Properties of Oil Palm Empty Fruit Bunch Fibre Reinforced Polyester Composites

Chinwe Evangeline Kamma1*
1Department of Sustainable Environment and Energy Systems, Middle East Technical University, Northern Cyprus Campus, 99738 Kalkanli, Guzelyurt, via Mersin 10, Turkey
*Corresponding author: kammachie@gmail.com

(Received 02-09-2022; Revised 16-09-2022; Accepted 21-09-2022)
Abstract

This work presents a systematic approach to evaluating the effect of fibre aspect ratio and fibre volume fraction on the tensile strength, ultimate elongation, modulus of elasticity, toughness, and impact energy of oil palm empty fruit bunch fibre reinforced polyester composites. The hand-lay-up technique was used to fabricate the composite materials. Response surface methodology was used to study the effect of the selected factors on the mechanical properties of the oil palm empty fruit bunch fibre reinforced polymer-based composite, and the optimum fibre aspect ratio and fibre volume fraction for each mechanical property were determined. From the optimization results, the maximum tensile strength obtained was 12.15 N/mm2 at a fibre aspect ratio of 64 and a fibre volume fraction of 26%. The maximum ultimate elongation was 1.939% at a fibre aspect ratio of 124 and a fibre volume fraction of 50%; the maximum modulus of elasticity was 1509 N/mm2 at a fibre aspect ratio of 124 and a 34% volume fraction; and the maximum toughness was 0.12 N/mm2 at a fibre aspect ratio of 89 and a volume fraction of 30%. The maximum impact energy was 307.72 J/m (5.77 ft-lbs/in) at a 28% fibre volume fraction and an aspect ratio of 69, and the maximum impact strength was 4.57 N/mm2 at a 36% volume fraction and an aspect ratio of 64.

Keywords: polyester composites, response surface model, mechanical properties, oil palm empty fruit bunch

1 Introduction

Polymer matrix composites are composed of fibres of various lengths bonded by a polymer matrix [1].
They are designed so that the mechanical loads to which the structure is subjected in service are supported by the reinforcement. According to [2], composites are materials consisting of two or more chemically and physically different phases separated by a distinct interface; the constituents are combined so as to achieve useful structural or functional properties unattainable by any constituent alone. Engineers began to take up composites in earnest when it was discovered that they hold great advantages over steel and its alloys, such as low weight, higher resistance, high fatigue strength, and faster assembly [3, 4]. Composites are used extensively as materials in aircraft, electronic devices, packaging, vehicles, home building, and so on. They comprise matrix and reinforcing materials, and their use until now has been more traditional than technical. They have served many useful purposes for a long time, but the utilization of natural fibres as reinforcement in a polymer matrix is quite recent [3]. Natural fibres offer advantages such as low density, low cost, low weight, high toughness, acceptable specific length recovery, biodegradability, and enhanced energy recyclability [5, 6].

The fruit bunches that remain as by-products of oil processing are presently industrial waste. Oil palm empty fruit bunches can be found littered everywhere in the oil palm producing areas of Nigeria, since the waste currently has no industrial application. Recently, because of its environmental impact, the use of oil palm empty fruit bunch as fuel has been discouraged in Nigeria [18], and its handling in the oil mills consumes energy.
However, oil palm empty fruit bunch is used locally in preparing local delicacies such as ukwa (breadfruit), ugba (oil bean salad), and abacha (sliced cassava, popularly called African salad), and, in rare cases now, in the production of local black soap because of the large potassium content of the bunch. This study hopes to help establish the usefulness of oil palm empty fruit bunch as fibre reinforcement for composites.

Several factors affect the performance of natural fibre reinforced polymer composites, such as fibre-matrix adhesion, fibre length, fibre volume fraction, and fibre aspect ratio [7]. Composites are generally a combination of heterogeneous materials [21]; thus, to improve the fibre-matrix interaction and adhesion, the fibre is mercerized to remove certain impurities and reduce its hydrophilic character, leaving the fibre with a rough surface [4, 8-16]. The mixing procedures, type of compatibilizers, and the processing and treatment conditions of the fibres and the polymer resin have been shown to affect the quality of interfacial bonding between fibre and resin [17, 18]. Recent work by Athijayamani et al. highlighted the effects of fibre length and content on composite tensile and flexural strength: the tensile and flexural strength of a hybrid roselle/sisal polyester composite increased with increasing fibre length and content, while the impact strength decreased correspondingly [19]. Sapuan et al. studied the mechanical properties of woven banana fibre reinforced epoxy composites and found that such composites can be used for household utilities [20].

The industrial potential of oil palm empty fruit bunch has, to our knowledge, not been well addressed in the literature. Therefore, the purpose of this work is to evaluate the tensile and impact strength of short random oil palm empty fruit bunch fibre reinforced polyester composites for application in the automobile industry.
The demand for low-density, low-cost, high-impact-resistance, renewable, and biodegradable materials has motivated much research on fibre-reinforced polymer composites. However, little work has modelled and optimized the mechanical properties of oil palm empty fruit bunch fibre reinforced polyester composites with respect to fibre aspect ratio and volume fraction using the response surface model. This work addresses that limitation by studying the effect of fibre aspect ratio and fibre volume fraction on the mechanical properties of oil palm empty fruit bunch fibre reinforced polyester composites, and by modelling those mechanical properties using response surface methodology to determine the optimum fibre aspect ratio and volume fraction for optimum mechanical performance.

2 Research Methodology

This section briefly describes the materials and methods used for the preparation of the composites, the chemicals used for the fibre treatments, and the analytical techniques used for the characterization of the fibres and composites.

2.1 Oil palm empty fruit bunch fibre (OPEFBF)

The oil palm empty fruit bunch fibre used in this work was obtained from the eastern part of Nigeria, where the crop is grown for consumption as well as for commercial use.

2.2 Unsaturated polyester resin

General-purpose-grade unsaturated polyester resin (HSR 8113M) was obtained from NYCIL Industrial Chemicals, Ota, Ogun State, Nigeria.

2.3 Chemicals for fibre modification

The sodium hydroxide used for fibre surface modification was of reagent grade and was obtained from New Concepts Laboratories, Obinze, Imo State, Nigeria.
2.4 Fibre preparation and surface modification

The OPEFBF was extracted and prepared as follows: the fruits were extracted mechanically from the bunch, leaving an empty fruit bunch, which was then retted in a water tank for about three days; the fibres so obtained were sun-dried.

2.4.1 Mercerization

For preparing randomly oriented oil palm empty fruit bunch fibre composites, the fibres were treated with 6 wt% NaOH for 90 min at room temperature. Finally, the fibres were washed repeatedly and then dried in air and in an oven. Fibres with an average diameter of 0.41 mm were obtained, and fibre volume fractions of 10%, 30%, and 50% and fibre aspect ratios of 24.39, 73.17, and 121.95 were used.

2.4.2 Preparation of OPEFBF-polyester reinforced composites and test specimens

Randomly oriented OPEFBF-polyester composites containing fibres of specific length and fibre volume fraction were prepared by the hand lay-up method, using a stainless steel sheet female mould with a marble tile male mould. Before composite preparation, the mould surface was polished well and a mould-releasing agent (Mirror Glaze) was applied to it. General-purpose unsaturated polyester resin (GP) was mixed with 5% by vol. MEKP (catalyst) and 10% by vol. cobalt naphthenate (accelerator). The fibre material was placed in the mould and the resin mixture poured evenly over it. The mould was then closed and the excess resin allowed to flow out as a 'flash' under pressing; the pressure was held constant during curing at room temperature for 24 hours. The composite sheet was then post-cured at 80°C for 4 hours. Test specimens according to ASTM standards were cut from the sheet.
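The fibre aspect ratios quoted above follow directly from dividing the fibre cut length by the 0.41 mm average diameter. A quick check (the cut lengths of 10, 30, and 50 mm are inferred here, since the paper reports only the resulting ratios, which this arithmetic reproduces):

```python
# Aspect ratio = fibre length / fibre diameter.
# Lengths of 10, 30, 50 mm are an assumption; the reported average
# diameter is 0.41 mm and the reported ratios are 24.39, 73.17, 121.95.
diameter_mm = 0.41
for length_mm in (10, 30, 50):
    ratio = length_mm / diameter_mm
    print(f"{length_mm} mm / {diameter_mm} mm = {ratio:.2f}")
```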
2.5 Mechanical property measurements

The standard mechanical properties were determined by the procedures found in ASTM (American Society for Testing and Materials) standards for plastics. The mechanical properties tested in this work were ultimate tensile strength and impact strength.

2.5.1 Tensile properties

The tensile properties were tested at the Civil Engineering Laboratory, University of Nigeria, Nsukka (UNN), using a Hounsfield Monsanto universal tensometer. The Hounsfield tensometer is a universal testing machine capable of testing metals, plastics, textiles, timber, composites, fibres, papers, etc. Provision is made for tests such as tensile, compression, flexural or bending, shear, and hardness. An important feature of the machine is the ease with which an autographic record can be made. It contains a spring beam with readily interchangeable ranges, used in conjunction with special attachments; this enables tests on a wide variety of materials. Like most testing machines, the load is applied at one end and its magnitude measured at the other. The test piece, held in suitable chucks fixed to the spherically seated nosepieces by the chuck attachment pins, is loaded either by hand or by a motor-driven unit through a worm gearbox. This causes the operating screw to move to the right and so transmits the pull to the test piece. The other end of the test piece is connected via the tension head and bridge to the centre of a precisely ground spring beam. The deflection of this spring beam, which is supported on rollers, is transmitted through a simple lever system to a mercury piston, which displaces mercury in a uniform-bore glass tube, thus magnifying the beam deflection and providing an easily read scale of load.
The advance of the mercury column is followed manually by the cursor and its attached needle, which is used to puncture the graph sheet at frequent intervals, thus recording the force. The movement of the worm gear that elongates the test piece is transmitted through a gear train to the recording drum, whose rotation is proportional to the elongation of the test piece. The resultant graph, produced by joining successive punctures, shows the load against the cross-head movement, which is virtually a true stress/strain diagram from which the modulus of elasticity and tensile strength of the material can be determined. From the recorded force and the cross-sectional area of the test piece, the mechanical properties tensile strength, ultimate elongation, and modulus of elasticity are determined.

2.5.2 Impact test

The impact properties, impact strength and impact energy, of the composite sheet were tested in the Civil Engineering Laboratory, University of Nigeria, Nsukka (UNN), using a notch impact-testing machine. Force was applied to the composite sheet until it fractured, and the impact strength was recorded. The impact strength of a composite material is its fracture energy.

3 Results and Discussion

Cross-sectional area of composite = 60.8 mm2
Average diameter of fibre = 0.41 mm

Table 3.1. Mechanical properties of OPEFBF composites at varying volume fractions and aspect ratios, using shape-preserving (linear interpolant) fitting to obtain the Young's modulus and toughness.
S/N | Fibre volume fraction (%) | Fibre aspect ratio | Ultimate tensile strength (N/mm2) | Ultimate elongation (mm) | Young's modulus (N/mm2) | Toughness (N/mm2)
1 | 10 | 24.39 | 7.40 | 0.0086 | 362.52 | 0.032
2 | 10 | 73.17 | 8.31 | 0.011 | 514.00 | 0.048
3 | 10 | 121.95 | 6.58 | 0.012 | 133.00 | 0.047
4 | 30 | 24.39 | 10.77 | 0.0013 | 1198.60 | 0.074
5 | 30 | 73.17 | 13.98 | 0.022 | 738.026 | 0.170
6 | 30 | 121.95 | 6.17 | 0.020 | 908.05 | 0.100
7 | 50 | 24.39 | 4.77 | 0.013 | 805.44 | 0.044
8 | 50 | 73.17 | 4.95 | 0.013 | 1028.00 | 0.040
9 | 50 | 121.95 | 4.11 | 0.017 | 370.68 | 0.048

3.1.1 Tensile test results

Table 3.2. Analysis of variance result for ultimate tensile strength

Source | Sum sq. | d.f. | Mean square | F | Prob>F
x1 | 48.66 | 2 | 24.33 | 6.67 | 0.053
x2 | 18.13 | 2 | 9.06 | 2.48 | 0.19
Error | 14.60 | 4 | 3.65 | |
Total | 81.38 | 8 | | |

From Table 3.2, the ANOVA results for tensile strength, the Prob>F value for fibre volume fraction (x1) of 0.0532 lies at the edge of the acceptable range (≤ 0.05), so fibre volume fraction can be taken as a significant factor at approximately the 95% confidence level. Compared with the aspect ratio, fibre volume fraction has a more significant effect on the ultimate tensile strength of the composites. Figure 3.1 shows that the ultimate tensile strength increases up to an optimum, after which it decreases. Circular contour lines in the plot imply that the factors significantly affect the property tested and that the optimum obtained is a global optimum, so further improvement may not be possible. The coefficient of determination (R2) obtained was 0.8207, implying that the model explains 82% of the variability in ultimate tensile strength, as shown in Table 3.3. The ultimate tensile strength of the plain polyester laminate is 48 N/mm2, so the results show that the addition of fibres to the resin reduced the tensile strength.
However, the Young's modulus, which is about 400–1000 N/mm², increased to about 1500 N/mm² after the addition of fibres; adding fibres to the polyester resin thus increased the Young's modulus while reducing the tensile strength.

Figure 3.1. 3D plot of tensile strength versus fibre aspect ratio and fibre volume fraction.

Table 3.3. Numerical results for model fit to experimental data for tensile strength
Variable  Coefficient  Std. error  t-stat  p-val
Constant  0.42         3.50        0.12    0.91    SSE = 14.594
x1        0.57         0.21        2.77    0.050   dfe = 4
x2        0.13         0.084       1.53    0.20    dfr = 4
x1²       -0.011       0.0034      -3.17   0.034   F = 4.576
x2²       -0.0010      0.00057     -1.81   0.14    p-val = 0.084953
R² = 0.8207, adjusted R² = 0.6413

y = 0.42381 + 0.5725*x1 + 0.12962*x2 - 0.010716*x1^2 - 0.0010277*x2^2

where y is the ultimate tensile strength, x1 is the fibre volume fraction, and x2 is the fibre aspect ratio. The optimum tensile strength of 12.15 N/mm² occurred at a fibre volume fraction of 26% and a fibre aspect ratio of 64, as can be seen from the surface plot (cursor reading: x: 26, y: 64, z: 12.15).

3.1.2 Ultimate Elongation
Table 3.4. Analysis of variance result for ultimate elongation
Source  Sum Sq.   d.f.  Mean Sq.      F     Prob>F
x1      0.000030  2     1.50044e-05   0.41  0.6886
x2      0.00013   2     6.59244e-05   1.8   0.2767
Error   0.00015   4     3.65828e-05
Total   0.00031   8

Figure 3.2. 3D plot of ultimate elongation vs.
fibre aspect ratio and fibre volume fraction.

From the ANOVA results for ultimate elongation in Table 3.4, it can be seen that the Prob>F values for both fibre volume fraction and fibre aspect ratio were greater than 0.1 (90% confidence), suggesting that neither factor significantly affects the ultimate elongation. In Fig. 3.2, the parallel contour lines indicate that the factors do not interact strongly. The p-values from the t-stats and F-stats in Table 3.5 indicate the same, as they do not fall within the acceptable range of 0.1–0.01 (90%–99% confidence bounds).

Table 3.5. Numerical results for model fit to experimental data for ultimate elongation
Variable  Coefficient  Std. error  t-stat  p-val
Constant  0.0038       0.0050      0.75    0.48    SSE = 0.000176
x1        0.000093     0.00011     0.84    0.43    dfe = 6
x2        0.000088     0.000045    1.94    0.10    dfr = 2
R² = 0.4278, adjusted R² = 0.2371, p-val = 0.18733

y = 0.0038 + 0.0000933*x1 + 0.0000881*x2

where x1 is the fibre volume fraction, x2 is the fibre aspect ratio, and y is the ultimate elongation. The optimum ultimate elongation of 0.01939 mm occurred at a fibre volume fraction of 50% and a fibre aspect ratio of 124, as can be seen from the surface plot (cursor reading: x: 50, y: 124, z: 0.01939).

3.1.3 Young's Modulus Results
Table 3.6. Analysis of variance result for Young's modulus
Source  Sum Sq.   d.f.  Mean Sq.  F     Prob>F
x1      578352.9  2     289176.5  5.26  0.0758
x2      185903    2     92951.5   1.69  0.2934
Error   219717.9  4     54929.5
Total   983973.8  8

Figure 3.3. 3D plot of Young's modulus versus fibre aspect ratio and fibre volume fraction.
Table 3.6 indicates that fibre volume fraction played a more significant role than fibre aspect ratio in affecting the Young's modulus of elasticity, based on their Prob>F values. From Table 3.7, the p-value of the t-stat for volume fraction is less than 0.05, which likewise indicates how significant a role fibre volume fraction played in the Young's modulus values of the OPF composites. The contour lines in Fig. 3.3 indicate a good interaction between the factors. The R² of 0.7767 implies that the model explains 78% of the variability in Young's modulus.

Table 3.7. Numerical results for model fit to experimental data for Young's modulus
Variable  Coefficient  Std. error  t-stat  p-val
Constant  -246.41      430.12      -0.57   0.60    SSE = 2.19e-05
x1        71.85        25.31       2.84    0.047   dfe = 4
x2        4.75         10.38       0.46    0.67    dfr = 4
x1²       -1.031       0.41        -2.49   0.068   F = 3.4784
x2²       -0.055       0.069       -0.79   0.48    p-val = 0.12732
R² = 0.7767, adjusted R² = 0.5534

y = -246.41 + 71.848*x1 + 4.7508*x2 - 1.0315*x1^2 - 0.0054756*x2^2

where x1 is the fibre volume fraction, x2 is the fibre aspect ratio, and y is the Young's modulus. The optimum occurred at a fibre volume fraction of 34% and a fibre aspect ratio of 124; the optimum Young's modulus is 1509 N/mm² (≈1500 N/mm²; surface-plot cursor reading: x: 34, y: 124, z: 1509).

3.1.4 Toughness
Table 3.8. Analysis of variance result for toughness
Source  Sum Sq.  d.f.  Mean Sq.  F     Prob>F
x1      0.011    2     0.0057    4.39  0.098
x2      0.00081  2     0.0004    0.3   0.75
Error   0.0052   4     0.0013
Total   0.017    8

Figure 3.4. 3D plot of toughness versus fibre aspect ratio and fibre volume fraction.

Tables 3.8 and 3.9 show that the fibre volume fraction played the more significant role in the toughness of the composites, based on the p-values from the t-stats and the ANOVA table.
Fig. 3.4 reveals a quadratic increase in toughness up to an optimum, after which there is a corresponding decrease (surface-plot cursor reading: x: 30, y: 89, z: 0.1231). The circular contour lines in the plot indicate a good interaction between the factors; in addition, they indicate a global optimum, so improvement on the optimum may not be possible. The R² value of 0.7015 indicates that the model explains 70% of the variability in the toughness of the composites.

Table 3.9. Numerical results for model fit to experimental data for toughness
Variable  Coefficient  Std. error  t-stat  p-val
Constant  -0.085       0.066       -1.28   0.27    SSE = 0.0052198
x1        0.011        0.0039      2.83    0.048   dfe = 4
x2        0.0011       0.0016      0.70    0.52    dfr = 4
x1²       -0.00018     0.000064    -2.94   0.042   F = 2.3501
x2²       -0.0000067   0.000011    -0.62   0.57    p-val = 0.21412
R² = 0.7015, adjusted R² = 0.4030

y = -0.085091 + 0.011027*x1 + 0.0011183*x2 - 0.00018798*x1^2 - 0.0000066892*x2^2

where x1 is the fibre volume fraction and x2 is the fibre aspect ratio. The optimum occurred at a fibre volume fraction of 30% and a fibre aspect ratio of 89; the optimum toughness is 0.1231 N/mm².

For the rational type of fitting, Tables 3.11 and 3.12 indicate, in agreement with the values obtained above, that the fibre volume fraction played a more significant role than the aspect ratio. However, the optimum obtained here is higher than that obtained previously, which indicates a better model fit with the rational (linear-quadratic) fitting. Moreover, from Tables 3.13 and 3.14, fibre volume fraction had the more significant effect on Young's modulus of elasticity, based on the respective ANOVA, t-stat, and F-stat values. An R² value of 0.9247 was obtained, showing a better fit than that obtained from the linear fit.
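The reported fit statistics can be checked directly against the tabulated data. As an illustrative sketch (Python; the nine data points are taken from Table 3.1 and the coefficients from the tensile-strength model of Table 3.3), the residual sum of squares and R² are reproduced from first principles:

```python
# Nine experimental points from Table 3.1:
# (fibre volume fraction %, fibre aspect ratio, ultimate tensile strength N/mm^2)
data = [(10, 24.39, 7.40), (10, 73.17, 8.31), (10, 121.95, 6.58),
        (30, 24.39, 10.77), (30, 73.17, 13.98), (30, 121.95, 6.17),
        (50, 24.39, 4.77), (50, 73.17, 4.95), (50, 121.95, 4.11)]

def uts_model(x1, x2):
    """Fitted tensile-strength model with the coefficients of Table 3.3."""
    return (0.42381 + 0.5725 * x1 + 0.12962 * x2
            - 0.010716 * x1**2 - 0.0010277 * x2**2)

y = [p[2] for p in data]
yhat = [uts_model(p[0], p[1]) for p in data]
ybar = sum(y) / len(y)

sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))  # residual sum of squares
sst = sum((yi - ybar) ** 2 for yi in y)               # total sum of squares
r2 = 1 - sse / sst

print(round(sse, 2), round(sst, 2), round(r2, 4))
```

Running this recovers SSE ≈ 14.59, a total sum of squares ≈ 81.39, and R² ≈ 0.821, matching the values reported in Tables 3.2 and 3.3 to rounding.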
Table 3.10. Mechanical properties of OPEFBF composites at varying volume fractions and aspect ratios, using the rational type of fitting to obtain the Young's modulus and toughness.
S/N  Fibre volume fraction (%)  Fibre aspect ratio  Ultimate tensile strength (N/mm²)  Ultimate elongation (mm)  Young's modulus (N/mm²)  Toughness (N/mm²)
1    10   24.39    7.40   0.0086  523.012  0.032
2    10   73.17    8.31   0.011   616.43   0.048
3    10   121.95   6.58   0.012   884.19   0.048
4    30   24.39    10.77  0.0013  831.36   0.072
5    30   73.17    13.98  0.022   800.35   0.17
6    30   121.95   6.17   0.020   958.075  0.16
7    50   24.39    4.77   0.013   879.98   0.043
8    50   73.17    4.95   0.013   935.54   0.040
9    50   121.95   4.11   0.020   837.20   0.063

For the impact test, Tables 3.15 and 3.16 show that both fibre aspect ratio and fibre volume fraction played a very important role, as their p-values are less than 0.05 (95% confidence bounds). Fig. 3.7 shows a quadratic increase in impact energy up to an optimum, after which there is a corresponding decrease. The circular contour lines indicate a good interaction between the factors and a global optimum, so improvement may not be possible. An R² value of 0.9373 was obtained, implying a good model fit that explains 94% of the variability in impact energy. Also, from Tables 3.17 and 3.18, the p-values indicate that both fibre volume fraction and fibre aspect ratio significantly affect the impact strength of the composites. The 3D plot of Fig. 3.8 shows oval contour lines, indicating a good interaction between the factors and a global optimum, i.e. a further improvement on it may not be possible. An R² value of 0.9117 implies a good model fit that explains 91% of the variability in impact strength.

3.1.5 Toughness
Table 3.11. Analysis of variance result for toughness
Source  Sum Sq.  d.f.
Mean Sq.  F  Prob>F
x1      0.018   2  0.0091  7.13  0.048
x2      0.0028  2  0.0014  1.1   0.415
Error   0.0051  4  0.0013
Total   0.026   8

Figure 3.5. 3D plot of toughness versus fibre aspect ratio and fibre volume fraction for the rational model.

Table 3.12. Numerical results for model fit to experimental data for toughness
Variable  Coefficient   Std. error  t-stat  p-val
Constant  -0.11         0.066       -1.69   0.17    SSE = 0.0052146
x1        0.014         0.0039      3.67    0.021   dfe = 4
x2        0.00058       0.0016      0.36    0.73    dfr = 4
x1²       -0.00024      0.000063    -3.77   0.020   F = 4.1167
x2²       -0.00000091   0.000011    -0.086  0.94    p-val = 0.099658
R² = 0.8046, adjusted R² = 0.61

y = -0.11124 + 0.014174*x1 + 0.00057795*x2 - 0.000238*x1^2 - 0.000000911*x2^2

where x1 is the fibre volume fraction, x2 is the fibre aspect ratio, and y is the toughness. The optimum occurred at a fibre volume fraction of 30% and a fibre aspect ratio of 124; the optimum toughness is 0.1574 N/mm² (surface-plot cursor reading: x: 30, y: 124, z: 0.1574).

3.1.6 Young's Modulus Results
Table 3.13. Analysis of variance result for Young's modulus
Source  Sum Sq.   d.f.  Mean Sq.  F     Prob>F
x1      80333.5   2     40166.8   3.02  0.1585
x2      35795.6   2     17897.8   1.35  0.357
Error   53137.4   4     13282.4
Total   169266.6  8

Figure 3.6. 3D plot of Young's modulus versus fibre aspect ratio and fibre volume fraction for the rational model.

Table 3.14.
Numerical results for model fit to experimental data for Young's modulus
Variable  Coefficient  Std. error  t-stat  p-val
Constant  235.38       140.30      1.68    0.19    SSE = 12745
x1        25.33        7.45        3.40    0.042   dfe = 3
x2        2.45         3.10        0.80    0.480   dfr = 5
x1·x2     -0.10        0.033       -3.08   0.054   F = 7.3688
x1²       -0.21        0.12        -1.81   0.17    p-val = 0.065459
x2²       0.015        0.019       0.76    0.50
R² = 0.9247, adjusted R² = 0.7992

y = 235.3800 + 25.3260*x1 + 2.4531*x2 - 0.1030*x1*x2 - 0.2088*x1^2 + 0.04791*x2^2

where y is the Young's modulus, x1 is the fibre volume fraction, and x2 is the fibre aspect ratio. The optimum occurred at a fibre volume fraction of 30% and a fibre aspect ratio of 124; the optimum Young's modulus is 1465 N/mm².

3.1.7 Impact Energy Test
Table 3.15. Analysis of variance results for impact energy
Source  Sum Sq.  d.f.  Mean Sq.  F      Prob>F
x1      9.07     2     4.53      19.98  0.0083
x2      4.50     2     2.25      9.92   0.0282
Error   0.91     4     0.23
Total   14.47    8

Figure 3.7. 3D plot of impact energy versus fibre aspect ratio and fibre volume fraction.

Table 3.16. Numerical results for model fit to experimental data for impact energy
Variable  Coefficient  Std. error  t-stat  p-val
Constant  -0.78        0.87        -0.90   0.42    SSE = 0.90738
x1        0.27         0.051       5.28    0.0062  dfe = 4
x2        0.081        0.021       3.86    0.018   dfr = 4
x1²       -0.0049      0.00084     -5.84   0.0043  F = 14.9486
x2²       -0.00059     0.00014     -4.21   0.014   p-val = 0.0113
R² = 0.9373, adjusted R² = 0.8746

y = -0.7827 + 0.2718*x1 + 0.0814*x2 - 0.0049*x1^2 - 0.0005960*x2^2

where y is the impact energy, x1 is the fibre volume fraction, and x2 is the fibre aspect ratio.
The optimum impact energy of 307.72 J/m (5.77 ft·lb) occurred at a fibre volume fraction of 28% and an aspect ratio of 69, as can be seen from the surface plot (cursor reading: x: 28, y: 69, z: 5.765).

3.1.8 Impact Strength Test Results
Table 3.17. Analysis of variance result for impact strength
Source  Sum Sq.  d.f.  Mean Sq.  F      Prob>F
x1      0.23     2     0.12      15.13  0.0136
x2      0.084    2     0.042     5.53   0.0705
Error   0.030    4     0.0076
Total   0.35     8

Figure 3.8. 3D plot of impact strength versus fibre aspect ratio and fibre volume fraction.

Table 3.18. Numerical results for model fit to experimental data for impact strength
Variable  Coefficient  Std. error  t-stat  p-val
Constant  -0.20        0.16        -1.27   0.27    SSE = 0.0305
x1        0.049        0.0094      5.16    0.0067  dfe = 4
x2        0.012        0.0038      3.04    0.038   dfr = 4
x1²       -0.00084     0.00015     -5.43   0.0056  F = 10.33
x2²       -0.00084     0.000025    3.24    0.032   p-val = 0.0220
R² = 0.9117, adjusted R² = 0.8235

y = -0.2029 + 0.0487*x1 + 0.118*x2 - 0.0008375*x1^2 - 0.0008405*x2^2

where y is the impact strength, x1 is the fibre volume fraction, and x2 is the fibre aspect ratio. The optimum of 0.046 N/mm² occurred at a fibre volume fraction of 36% and a fibre aspect ratio of 64, as can be seen from the surface plot (cursor reading: x: 36, y: 64, z: 4.574).
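Because each fitted model is quadratic in x1 and x2 with no cross term (except the rational Young's-modulus fit), the reported optima follow in closed form from setting the partial derivatives to zero: x1* = -b1/(2·b11) and x2* = -b2/(2·b22). A minimal check (Python) against the impact-energy coefficients of Table 3.16:

```python
def quad_optimum(b0, b1, b2, b11, b22):
    """Stationary point of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2.

    When b11 < 0 and b22 < 0 this stationary point is the global maximum,
    which is why the paper's circular contours imply a global optimum."""
    x1 = -b1 / (2 * b11)
    x2 = -b2 / (2 * b22)
    y = b0 + b1 * x1 + b2 * x2 + b11 * x1**2 + b22 * x2**2
    return x1, x2, y

# Coefficients of the impact-energy model (Table 3.16)
x1_opt, x2_opt, e_opt = quad_optimum(-0.7827, 0.2718, 0.0814, -0.0049, -0.0005960)
print(x1_opt, x2_opt, e_opt)
```

The result lands close to the reported optimum: a fibre volume fraction near 28%, an aspect ratio near 68–69, and an impact energy of about 5.77 ft·lb, agreeing with the surface-plot cursor reading.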
The mean energy absorption capacity is given as

U = (A_max² × A_0) / (2 × E_mean)

where A_max is the maximum breaking stress of the sample, A_0 is the original length of the sample (160 mm = 0.16 m), and E_mean is the calculated mean Young's modulus of elasticity:

E_mean = (362.52 + 514 + 133 + 1198.6 + 738.0256 + 908.05 + 805.44 + 1028 + 370.6796) / 9 = 673.15 N/mm² = 673.15 × 10⁶ N/m²

The mean energy absorption capacity is therefore

U_m = (U_1 + U_2 + … + U_9) / 9 = 0.2 J

4 Conclusion
This work modelled the mechanical properties of OPEFBF composites; the properties tested include ultimate tensile strength, toughness, Young's modulus, impact strength, and impact energy. Based on the results, optimization was carried out and the optimum value of each property was determined. It was found that fibre volume fraction affects the properties more significantly than aspect ratio. For ultimate tensile strength, the optimum obtained was a global one, at a fibre volume fraction of 26% and an aspect ratio of 64; the optimum ultimate tensile strength was 12.15 N/mm². For Young's modulus, optimization revealed an optimum of 1509 N/mm² at a fibre volume fraction of 34% and an aspect ratio of 124. For toughness, optimization showed an optimum of 0.1231 N/mm² at a fibre volume fraction of 30% and an aspect ratio of 89. For the rational (linear-quadratic) type of fitting, the optimum toughness was 0.16 N/mm² at a fibre volume fraction of 30% and an aspect ratio of 124; likewise, the optimum Young's modulus was found to be 1465 N/mm² at a 30% volume fraction and an aspect ratio of 124. For impact energy, optimization indicated a global optimum at an aspect ratio of 69 and a fibre volume fraction of 28%.
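The mean-modulus arithmetic can be reproduced in a few lines (Python). The energy expression is an assumption here, read from the garbled original as the strain-energy form U = A_max²·A_0/(2·E_mean); the per-sample A_max values are not tabulated in this excerpt, so the function is shown but only the mean modulus is evaluated:

```python
# Linear-fit Young's modulus values (N/mm^2) from Table 3.1
moduli = [362.52, 514.00, 133.00, 1198.60, 738.0256,
          908.05, 805.44, 1028.00, 370.6796]

e_mean = sum(moduli) / len(moduli)      # mean modulus, ~673.15 N/mm^2

def energy_absorption(a_max, a0=0.16, e_mean_pa=None):
    """Energy absorbed per sample, U = a_max^2 * a0 / (2 * E_mean).

    a_max: maximum breaking stress (N/m^2); a0: original length (m).
    Formula as reconstructed from the text, not verified against the
    authors' per-sample inputs."""
    if e_mean_pa is None:
        e_mean_pa = e_mean * 1e6        # convert N/mm^2 -> N/m^2
    return a_max**2 * a0 / (2 * e_mean_pa)

print(round(e_mean, 2))
```

The computed mean, 673.15 N/mm², matches the value used in the text for U_m.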
An optimum impact energy of 307.72 J/m (5.77 ft·lb) was obtained.

Acknowledgements
I want to specially acknowledge Engr. Dr. Emmanuel C. Osoka for his relentless effort in guiding and supervising this work and ensuring that I conducted resourceful research.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 1, pages 125–130, p-ISSN 2655-8564, e-ISSN 2685-9432

Conceptual Design of Modular Chassis Jig of Student Competition Car
Heryoga Winarbawa
Department of Mechanical Engineering, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
Corresponding author: winarbawa@gmail.com
(Received 28-05-2021; revised 16-06-2021; accepted 16-06-2021)

Abstract
A chassis jig is needed to ensure that welded chassis components do not warp or deform during the welding process. Through concept screening and concept scoring, multiple chassis jig designs are narrowed down for the next development process. This study aims to design a chassis jig for the fabrication of student competition car chassis. The desired result of this design process is a chassis jig able to accommodate a wide variety of student competition car chassis.

Keywords: car, chassis, jig, fixture

1 Introduction
Students of mechanical engineering programs are often challenged to put their studies into practice, and student competitions are one such opportunity. These competitions drive students to build something based on what they learn in class. Formula Student (FS), Shell Eco-marathon (SEM), Kompetisi Mobil Hemat Energi (KMHE), and Kompetisi Mobil Listrik Indonesia (KMLI) are examples where students need to build cars to compete. The car chassis is an important element for vehicle stability and driver/passenger safety. There are several types of chassis: ladder frame, space frame, and monocoque.
A lot of student teams use a space frame chassis for their cars, chosen for its simpler manufacturability and cost effectiveness. Space frame chassis are manufactured by welding sections of tubing. To ensure accuracy, the chassis tubes are welded on a chassis jig with several fixtures; welding fixtures are typically the most common devices used to align and retain the various pieces for welding [1]. Some chassis jigs on the market use steel for the table top, usually coated with another material to prevent rust. The scope of this study was limited to accommodating the competition car chassis welding process. This study summarizes the design of a chassis jig that accommodates the kinds of chassis suited to each competition.

2 Research Methodology
This study applied the three-stage process shown in Figure 1. The first stage is to collect all required measurements, such as the length, width, and height of the car chassis based on the technical rules of each competition. For the second stage, recall that concept selection is the process of evaluating concepts with respect to customer needs and other criteria, comparing the relative strengths and weaknesses of the concepts, and selecting one or more concepts for further investigation, testing, or development [2]. Material selection criteria are limited to ease of manufacturing and assembly and rust resistance. In the third stage, the chassis jig was designed and some features were added.

Figure 1. Study process: collect data → concept selection → design & features.

Collect data. Car dimensions can be found in the technical data of each competition [3-5]. The KMHE competition adopts the SEM environment, so the technical rules for car dimensions are basically the same.
Maximum dimension requirements are selected so that the manufactured chassis does not break the competition rules. The closest car chassis CAD design for each type of competition was also collected to simulate fit on the chassis jig top.

Concept selection. A chassis jig is constructed from a table top and legs. Steel tube, hollow aluminum tube, and aluminum extrusion are the most common materials for building one. Concept A is a bulky, heavy-duty chassis jig constructed from square hollow steel as the frame and thick steel sheet as the table top. Concept B is a knock-down design with slatted steel sheet as the top, frame, and legs. Concept C is constructed entirely from aluminum extrusion. Concept D has a steel sheet with welded ribs as the table top and bent steel sheet constructed as the frame. Concept E is assembled from welded square hollow steel with aluminum extrusion as the table top. All of these concepts are narrowed down through concept screening, tabulated in Table 1.

Table 1. Concept screening
Selection criteria    A    B    C    D    E
Weight                -    -    0    +    -
Rust resistant        -    0    0    0    0
Easy to assemble      -    0    0    +    +
Manufacturing cost    -    -    0    +    +
Fixture modularity    +    +    0    0    0
Rigidity              +    +    0    -    +
Easy to move          -    +    0    +    -
Sum +'s               2    3    0    4    3
Sum 0's               0    2    7    2    2
Sum -'s               5    2    0    1    2
Net score            -3    1    0    3    1
Rank                  5    3    4    1    2
Continue?            No   Yes  Combine  Yes  Combine

Through concept screening, the concept variation has been reduced to three concepts: Concept B, Concept D, and a combination of Concepts C and E, hereafter called Concept CE. Further issues need to be clarified before choosing the final concept. Selection criteria are weighted to increase the sensitivity of concept determination, as shown in Table 2.

Table 2.
Concept scoring
Selection criteria   Weight   B (rating / weighted)   CE (rating / weighted)   D (rating / weighted)
Weight               10%      2 / 0.2     4 / 0.4     5 / 0.5
Rust resistant       15%      3 / 0.45    4 / 0.6     3 / 0.45
Easy to assemble     10%      3 / 0.3     2 / 0.2     3 / 0.3
Manufacturing cost   25%      1 / 0.25    3 / 0.75    4 / 1
Fixture modularity   15%      4 / 0.6     4 / 0.6     3 / 0.45
Rigidity             20%      4 / 0.8     4 / 0.8     1 / 0.2
Easy to move         5%       4 / 0.2     4 / 0.2     4 / 0.2
Total score                   2.8         3.55        3.1
Rank                          3           1           2
Continue?                     No          Develop     No

The total score for each concept is the sum of the weighted scores, formulated as:

S_j = Σ_{i=1}^{n} r_ij · w_i    (1)

Here, r_ij is the raw rating of concept j on the i-th criterion, w_i is the weight of the i-th criterion, n is the number of criteria, and S_j is the total score of concept j.

3 Results and Discussion
The chosen concept is the combination of Concepts C and E. Concept CE is constructed from welded square hollow steel as the bottom part of the frame and a slatted arrangement of aluminum extrusion as the table top, as shown in Figure 2a. The table top is raised from the bottom part of the frame by further aluminum extrusion. This configuration provides better stability, as the center of gravity is lower. The lower part is also useful for storing jigs/fixtures, welding equipment, and power tools required during chassis fabrication. Figure 2b shows the chassis jig with an SEM prototype car chassis on top of it. For an FS car chassis, some additional jig members are added to support the front and back sections of the chassis, as shown in Figure 2c. Fixtures for holding the chassis materials will be designed as future work.

Figure 2. Illustrations of chassis jigs: (a) chassis jig, (b) chassis jig with SEM prototype car chassis, (c) chassis jig with FS car chassis.

4 Conclusion
Concept CE was concluded to be the best concept after several steps.
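Equation (1) and the weighted scores of Table 2 can be reproduced with a short script (a sketch in Python; the criterion ordering follows Table 2):

```python
# Weights and raw ratings from Table 2, in criterion order:
# weight, rust resistance, assembly, cost, modularity, rigidity, mobility
weights = [0.10, 0.15, 0.10, 0.25, 0.15, 0.20, 0.05]
ratings = {"B":  [2, 3, 3, 1, 4, 4, 4],
           "CE": [4, 4, 2, 3, 4, 4, 4],
           "D":  [5, 3, 3, 4, 3, 1, 4]}

# S_j = sum_i r_ij * w_i   (equation 1)
scores = {c: sum(r * w for r, w in zip(rs, weights))
          for c, rs in ratings.items()}
best = max(scores, key=scores.get)
print(scores, best)
```

This recovers the totals of Table 2 (B: 2.8, CE: 3.55, D: 3.1) and selects CE as the winning concept.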
The excellences of Concept CE over the others are:
• Lightweight: using aluminum extrusion for most components reduces the overall weight.
• Rust resistant: aluminum is inherently rust resistant, since an oxide film forms over its entire surface.
• Easy to assemble: aluminum extrusion provides tracks with standardized dimensions and its own accessories such as nuts, bolts, and connectors.
• Low manufacturing cost: since it already has tracks and accessories, fabrication and assembly is simply a matter of cut and connect; no complex manufacturing processes on many machines are needed.
• Modular fixtures: the lower fixture designs are based mainly on the aluminum extrusion track design, and the upper ones follow the workpiece design, that is, aluminum hollow tube for the chassis.
• Rigid enough: the aluminum extrusion cross-section is designed to maintain rigidity along its length; a bigger cross-sectional area provides minimum deflection and maximum rigidity when made into a structure.
• Easy to move: it is already lightweight because half of it is made of aluminum, and movement is also helped by castor wheels.

Note that because this concept uses aluminum as the top, it does not provide impact protection. The top surface is only used as a planar reference and for placing fixtures.

References
[1] E. G. Hoffman, Jig and Fixture Design, 5th edition, Clifton Park, NY: Delmar, 2004.
[2] K. T. Ulrich and S. D. Eppinger, Product Design and Development, New York, NY: McGraw-Hill Education, 2016.
[3] "Formula Student," [Online]. Available: https://www.imeche.org/docs/defaultsource/1-oscar/formula-student/2021/forms/fs2020-rules.zip?sfvrsn=2. [Accessed 24 March 2021].
[4] "Shell Eco-marathon," [Online].
Available: https://base.makethefuture.shell/en_gb/service/api/home/shell-eco-marathon/globalrules/_jcr_content/root/content/document_listing/items/download_1733386233.stream/1598973488368/0e057b48fe5e2adc044ac860f83ebf26f8ccd5f9/shell-ecomarathon-2020-official-rul. [Accessed 24 March 2021].
[5] "Kompetisi Mobil Listrik Indonesia," [Online]. Available: http://kmli.polban.ac.id/component/phocadownload/category/1-panduanlomba.html?download=6:panduan-kmli-xi-2019. [Accessed 24 March 2021].

International Journal of Applied Sciences and Smart Technologies, Volume 1, Issue 1, pages 1–10, ISSN 2655-8564

Indian Traffic Signboard Recognition and Driver Alert System Using Machine Learning
Shubham Yadav, Anuj Patwa, Saiprasad Rane, Chhaya Narvekar
Xavier Institute of Engineering, Mahim Causeway, Mahim (West), Mumbai, Maharashtra 400016, India
Corresponding author: shubham.yadav.5497@gmail.com
(Received 20-04-2019; revised 14-05-2019; accepted 14-05-2019)

Abstract
Signboard recognition and driver alert systems have a number of important application areas, including advanced driver assistance systems, road surveying, and autonomous vehicles. This system uses image processing techniques to isolate relevant data captured from real-time streaming video. The proposed method is broadly divided into five parts: data collection, data processing, data classification, training, and testing. The system uses a variety of image processing techniques to enhance image quality, remove non-informational pixels, and detect edges. Feature extractors are used to find the features of each image. The machine learning algorithm support vector machine (SVM) is used to classify the images based on their features. If the features of a sign captured from the video match a trained traffic sign, the system generates a voice signal to alert the driver.
in india there are different traffic signboards, and they are classified into three categories: regulatory signs, cautionary signs, and informational signs. these indian signs have four different shapes and eight different colors. the proposed system is trained for ten different types of sign; in each category, more than a thousand sample images are used to train the network. keywords: image processing, signboard detection, svm algorithm, raspberry pi 1 introduction recognizing a signboard correctly, at the right time and at the right place, is very important for drivers to ensure a safe journey for themselves and their passengers. however, due to changes in weather conditions or viewing angles, signs are sometimes difficult to see until it is too late. nowadays, increases in computing power have brought computer vision to many applications. on the other hand, the increase in traffic accidents accompanying the growing amount of traffic has become a serious problem for society. the rate of road accidents is particularly high under special road conditions, such as at the entrance to a one-way street, on sharp curves, and at intersections. one possible countermeasure is to install "stop", "no left turn" and other signs in order to notify the driver of the road conditions and other traffic information. however, there remains the possibility that a driver, depending on his or her state of mind, fails to notice a sign while driving; a serious accident is possible if the driver fails to notice a sign such as "do not enter" or "stop" [1]. accidents can be prevented by utilizing an automatic signboard recognition system to provide traffic information to the driver, including information about the road in front of the vehicle. signs also have distinct shapes such as circles, triangles, rectangles, and octagons. these systems assist drivers to drive safely.
while driving, the driver receives alert messages such as "go slow" or "speed breaker ahead". many detection techniques have been developed in recent years for traffic light and signboard detection, but a system that combines the detection of traffic signs with the sending of an alert message does not exist. paying attention to the many different traffic signs is a difficult task for every driver, so we propose a system that can be used to detect traffic signboards. traffic sign detection is an important part of driver assistance systems. signs can be designed in different colors and shapes against a high-contrast background. although traffic signs are nominally oriented upright and facing the camera, captured images still suffer geometric and rotational distortions. in these cases accuracy is a key consideration: any misclassified or undetected sign or light will produce adverse impacts on the system. the basic idea of the proposed system is to alert the driver to the presence of a traffic sign some distance ahead. a system able to detect, recognize, and interpret road traffic signs would be a prodigious help to the driver. the objective of an automatic road sign recognition system is to detect and classify one or more road signs within live color images captured by a camera; the color of a traffic sign is easily distinguishable from the colors of the environment. the system provides the driver with real-time information from road signs, which is one of the most important and challenging tasks, and then generates a voice warning in advance of any danger. this warning allows the driver to take appropriate corrective decisions in order to mitigate or completely avoid the event.
first, it is necessary to select the hardware equipment to solve this problem. the second stage is based on color processing, or an object detection method based on rapid color changes. image processing technology is mostly used for the identification of the signboards, and the alert to the driver is given as audio output. 2 review of literature many algorithms and methodologies have been proposed for road traffic sign detection [2-6]. reza azad proposed a system for the detection and recognition of iranian traffic signs in which the letters are segmented with an svm classifier. another method has been proposed by gauri tagunde, in which detection and recognition are based on color and shape features. most of these systems typically involve two tasks: finding the locations and sizes of signboards in natural scene images (signboard detection) and recognizing the detected signboards to interpret their meaning (signboard recognition). being designed with regular shapes and conspicuous colours, signboards attract attention so as to be easily noticed by human drivers. mohammad amen proposes a system using the ycbcr colour space and shape-based filtering; the detected traffic signs are tracked and recognized using interest point descriptors. the algorithm is robust and can detect signs even when the traffic signboard is rotated, and the traffic sign template database can be updated easily. the method aims at high accuracy in recognizing traffic signs in real time with a low computational cost; the reduced computational complexity of the algorithm enables the implementation of the proposed method in embedded systems for driver assistance. for traffic sign detection, the majority of systems use colour information as a method for segmenting images.
the performance of colour-based road sign detection is often reduced in scenes with strong illumination, poor lighting, or adverse weather conditions. the vast majority of existing systems rely on hand-labelled real images, which is a repetitive, time-consuming, and error-prone process. information about traffic symbols, such as shape and colour, can be used to place traffic symbols into specific groups; however, several factors can hinder effective detection and recognition of traffic signs. these factors include variations in illumination, occlusion of signs, motion blur, and weather-worn deterioration of signs. road scenes are also generally cluttered and contain many strong geometric shapes that could easily be misclassified as road signs. accuracy is a key consideration because even one misclassified or undetected sign could have an adverse impact on the driver. 3 design road accidents frequently take place, and this can be due to a driver's ignorance of traffic signboards and road signs. as road traffic increases day by day, there is a necessity to follow the traffic rules with proper discipline. traffic signboard detection is an important part of driver assistance systems. the basic idea of the proposed system is to provide a real-time voice signal to the driver about the presence of a traffic signboard some distance ahead. the project is divided into two parts: 1. training 2. implementation the system provides the driver with real-time information from road signboards, which is one of the most important and challenging tasks. it generates a voice signal to the driver in advance of any danger; this warning allows the driver to take appropriate actions in order to avoid an accident. figure 1.
flow diagram of the proposed system (stages: camera captures image from video → image processing → svm classification and recognition → voice alert message through speaker to driver) the alert to the driver is given as a voice signal through a speaker. there are two common methods to classify images in machine learning: the convolutional neural network (cnn) and the support vector machine (svm). the proposed system uses the support vector machine (svm) for classification. 4 working of the system the support vector machine is a supervised machine learning algorithm, also known as a linear classifier, mostly used for classification. the main advantage of the svm algorithm is its strong ability to classify data; when the dataset has a clear classification boundary, svm is a better option than other available methods. svm is considered one of the best classifiers and is simpler to use and understand than other classifiers. the proposed system uses 90% of the sample data for training and 10% for testing. the working of the system is broadly divided into three phases: 1. color segmentation 2. shape classification 3. recognition phase 1: color segmentation in this phase candidate blobs are extracted from the input image. the color segmentation phase is important because the colors of traffic signs are chosen so that they appear different from the surrounding environment. the hsi color space is used for segmenting the color. this is essentially detection, where the region of interest is identified using image processing techniques: the system creates contours on each video frame and finds ellipses and circles among those contours.
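the hue-based thresholding described for phase 1 can be sketched in pure python; the hue range for red signs and the saturation/value cut-offs below are illustrative assumptions, not the system's actual thresholds (a real implementation would likely use opencv on the raspberry pi):

```python
import numpy as np

def rgb_to_hsv(img):
    """convert an rgb image (h, w, 3 floats in [0, 1]) to hue, saturation, value."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    diff = mx - mn + 1e-12
    h = np.zeros_like(mx)
    h = np.where(mx == r, (60 * ((g - b) / diff) + 360) % 360, h)
    h = np.where(mx == g, 60 * ((b - r) / diff) + 120, h)
    h = np.where(mx == b, 60 * ((r - g) / diff) + 240, h)
    s = np.where(mx > 0, diff / mx, 0)
    return h, s, mx

def red_sign_mask(img):
    """binary mask of pixels whose hue falls in a red range (illustrative thresholds)."""
    h, s, v = rgb_to_hsv(img)
    red_hue = (h < 15) | (h > 345)          # red hue wraps around 0 degrees
    return red_hue & (s > 0.5) & (v > 0.2)  # ignore washed-out and dark pixels

# a toy 1x2 image: one saturated red pixel, one green pixel
img = np.array([[[0.9, 0.1, 0.1], [0.1, 0.9, 0.1]]])
print(red_sign_mask(img))  # the red pixel is masked True, the green pixel False
```

candidate blobs would then be found as connected regions of the mask before contour and ellipse fitting.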
the detection strategy includes increasing the contrast of the video frame, removing unnecessary colors such as green with an hsv color range, using the laplacian of gaussian to display the border of the candidate blob, making contours by binarization, and detecting the ellipse-like and circle-like contours. phase 2: shape classification the candidate blobs extracted from the video frame in the segmentation phase now need to be classified. the classification of these candidates is based on shape, for which a linear svm is used. there are two major tasks involved in shape classification. 1. shape feature extraction: the first step in shape classification is to build feature vectors as input to the linear svm. many methods have been proposed for obtaining the feature vectors (see [7, 8]). in this work, we have used the distance to border (dtb) vector [8]; dtb is the distance from the external edge of the blob to its bounding box. 2. training and testing using a linear svm: once the feature vector for the roi is created, classification is initiated. for classification of the shape, eight linear svms are used. svm is a machine learning algorithm which can classify data into different groups. it is based on the concept of a decision plane, where the training data is mapped to a higher-dimensional space and separated by a plane defining two or more classes of data. an extensive introduction can be found in [9, 10]. figure 2. possible hyperplanes [11] the proposed system is trained for ten traffic signs, shown in figure 3; 90% of the sample data is used to train the system and 10% is used for testing. figure 3.
trained traffic signs phase 3: recognition once the shape classification process is done, the next step is sending the blob to the pattern recognition stage. to perform pattern recognition, the radial basis function (rbf) is used: in this phase a non-linear svm performs the recognition. the classified blob is first converted into a grayscale image, and a feature extractor is then applied to extract the features of the blob. the non-linear svm compares the extracted features with all trained blobs having the same shape and color. if the blob features match the trained sign features, the system generates an alert message calling the label of that class; the alert message is given in the form of a voice signal through the speaker. 5 working result of proposed system the proposed system recognized almost all traffic signs correctly when the signs were stationary, while the accuracy of the system decreases in motion. the environment and light also have an adverse effect on the system. sometimes the images captured from the real-time streaming video have very high or very low contrast, and in such cases the system was not able to detect the traffic sign. the performance of the proposed system in a new environment is therefore not good, but if the system has sample images from that environment then it works well; the accuracy of the system thus depends on the number of sample images for a particular sign in that environment. due to the text-to-speech converter api, the voice signal is sometimes delayed by a few seconds. 6 conclusions the performance of the proposed system is quite good when the system moves slowly with the signboard stationary, but the performance when moving fast is not as per expectation. environment and light also affect the system performance.
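the rbf comparison used in the recognition phase can be illustrated with a plain gaussian kernel. this is only a nearest-template similarity sketch, not a full non-linear svm (which would also learn support vectors and weights), and the feature vectors, labels, and gamma value are hypothetical:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """gaussian rbf kernel: exp(-gamma * ||x - y||^2); 1.0 means identical features."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def recognize(features, templates):
    """return the label of the trained template most similar to the blob features."""
    best_label, best_sim = None, -1.0
    for label, tmpl in templates.items():
        sim = rbf_kernel(features, tmpl)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

# hypothetical 3-d feature vectors for two trained signs of the same shape/color group
templates = {"stop": [1.0, 0.0, 0.2], "no entry": [0.0, 1.0, 0.8]}
print(recognize([0.9, 0.1, 0.25], templates))  # stop
```

in the described system, the label returned here is what gets passed to the text-to-speech converter for the voice alert.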
sometimes, due to the text-to-speech converter api, the alert signal was delayed. according to statistical reports, 3 deaths happen every 10 minutes due to road accidents in india. on successful implementation of this project, we expect a drastic reduction in road accidents. references [1] g. revathi and g. balakrishnan, "indian sign board recognition using image processing techniques," international journal of advanced research in biology engineering science and technology, 2 (15), 326−330, 2016. [2] r. azad, b. azad, and i. t. kazerooni, "optimized method for iranian road signs detection and recognition system," international journal of research in computer science, 4 (1), 19−26, 2014. [3] k. m. sumi and k. m. n. arun, "detection and recognition of road signs," international journal of computer applications, 160 (3), 1−5, 2017. [4] a. p. t. agnes, c. a. aiswarya, a. augustine, a. s. kumar, and n. aswathy, "real time traffic light and sign board detection," international journal of engineering research and general science, 5 (3), 50−57, 2017. [5] w. zhang, "shift-invariant pattern recognition neural network and its optical architecture," proceedings of annual conference of the japan society of applied physics, p. 734, 1988. [6] https://medium.com/@raghavprabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148 (accessed on 14-05-2019). [7] p. g. jiménez, h. g. moreno, p. siegmann, s. l. arroyo, and s. m. bascón, "traffic sign shape classification based on support vector machines and the fft of the signature of blobs," proceedings of the 2007 ieee intelligent vehicles symposium, 375−380, istanbul, 13-15 june 2007. [8] s. l. arroyo, p. g. jiménez, r. m. bascón, f. l. ferreras, and s. m.
bascón, "traffic sign shape classification evaluation i: svm using distance to borders," proceedings of ieee intelligent vehicles symposium, 557−562, las vegas, june 2005. [9] s. abe, support vector machines for pattern classification, springer, london, 2005. [10] c. c. chang and c. j. lin, libsvm: a library for support vector machines, 2001, https://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf (accessed on 14-05-2019). [11] https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47 (accessed on 14-05-2019). international journal of applied sciences and smart technologies volume 4, issue 1, pages 89–96 p-issn 2655-8564, e-issn 2685-9432 89 this work is licensed under a creative commons attribution 4.0 international license shrinkage of biocomposite material specimens [ha/bioplastic/sericin] printed using a 3d printer using the taguchi method felix krisna aji nugraha1,* 1department of mechanical design technology, sanata dharma university, yogyakarta, indonesia *corresponding author: felix@pmsd.ac.id (received 15-05-2022; revised 25-05-2022; accepted 26-05-2022) abstract fused deposition modeling in the rapid prototyping technique was modified to use a paste-shaped biocomposite material. one of the correction factors for the printed test specimen results is shrinkage.
the paste material used is hydroxyapatite [Ca10(PO4)6(OH)2] and tapioca bioplastic. in addition to these materials, sericin is added, which is produced from extracts of silkworm cocoons. the compositions of the biocomposite paste, expressed as the ratio of hydroxyapatite to bioplastic, were 40:60, 50:50, and 60:40, with 0.3% sericin added to the hydroxyapatite solution. the parameters used in the printing process of the test specimens are a perimeter speed of 60 mm/s, an infill speed of 10 mm/s, and a layer height of 0.45 mm. the specimen design has dimensions of 100 mm long, 25 mm wide, and 3 mm thick. the optimal shrinkage of the test specimens was analyzed using the taguchi method. specimen printing is done using an additive manufacturing method; the process is carried out on a portabee three-dimensional printing machine whose fdm system is modified to an aqueous-based extrusion fabrication (abef) system. the results show that the optimum composition for shrinkage of the biocomposite material was 50:50, with the addition of 0.3% sericin to the hydroxyapatite solution. keywords: shrinkage, biocomposite, three-dimensional printer, taguchi 1 introduction much research on biocomposite materials has been carried out. this research examined the composition of biocomposite materials consisting of [hydroxyapatite/bioplastic/sericin]. the materials in this study were hydroxyapatite from sigma aldrich, tapioca starch bioplastic, and sericin extracted from silkworm cocoons (bombyx mori). biomaterials can be divided into two types, namely natural and artificial biomaterials. examples of natural biomaterials are collagen, elastin, and chitin, while artificial biomaterials are made of metals, polymers, ceramics, and composites [1]. ceramic biomaterials have the highest biocompatibility compared to other biomaterials.
ceramic materials in biomaterials are known as bioceramics [2]. a composite is a material formed from a non-homogeneous combination of two or more constituent materials; because the constituent materials have different characteristics, the combination produces a new material (the composite) with properties different from those of its constituents [3]. a study of composite scaffold specimens used biocomposite materials with nanohydroxyapatite (nha) and tapioca flour (bp) bioplastics, with the nha/bp ratio varied at 0, 20, 40, 60, and 80% (w/w). the tensile strength of the scaffold material was tested with the diametral tensile strength (dts) test; the highest tensile strength of the nanobiocomposite material was obtained with an nha/bp ratio of 60% (w/w) [4]. the addition of 2.7% camphorquinone and the use of ultraviolet light on a [hydroxyapatite/bioplastic] biocomposite with a ratio of ha/bp = 47.86%/52% resulted in the fastest solidification time of 408 seconds and a dts of 2576.74 kpa [5]. zirconia content at 40% and higher can increase the porosity of the biocomposite material, causing a decrease of 0.039 mpa in the compressive strength of hydroxyapatite-zirconia [6]. rapid prototyping is a method of rapidly creating three-dimensional objects from digital data. rapid prototyping differs from conventional manufacturing processes, which make a product by using a cutting tool on a workpiece to obtain a three-dimensional object of the desired shape; instead it uses an additive principle that adds material onto the already formed layer. because it uses the additive principle, rapid prototyping is also known as additive manufacturing [7].
in general, the working principle of fdm is based on the deposition of melted thermoplastic filament onto the workbench to create a layer-by-layer structure through the movement of the extrusion nozzle on the x, y, and z axes [8]. basically, abef has a working principle similar to fdm; however, the abef method uses a material in the form of a semi-liquid paste for the construction of three-dimensional objects. the paste material is extruded from the container to the nozzle using the screw extrusion principle [9]. in this study, a portabee three-dimensional printer was modified to change its working principle from fdm to an abef system, using a single-screw extruder. 2 research methodology the ingredients in this study were hydroxyapatite (catalog no. 04328, sigma-aldrich) and commercial tapioca flour. sericin was extracted from the cocoons of the silkworm (bombyx mori) by hydrothermal processing. the citric acid and glycerin used are technical-grade materials. the biocomposite material is made by a wet process, using distilled water. a hydroxyapatite suspension with a percentage of 20% (w/v) was prepared by dispersing ha powder in distilled water with a citric acid percentage of 10% (w/w); citric acid is used as a dispersant. the suspension was mixed at 1000 rpm, at a temperature of 25 °c, for 20 hours to obtain a homogeneous suspension. a suspension of 20% (w/v) tapioca flour was prepared by dispersing tapioca flour powder in distilled water with 3.25% (v/v) glycerin. the tapioca flour suspension was transformed into bioplastic by stirring at 600 rpm at 50 °c for 15 minutes. the biocomposite paste material was prepared by mixing the ha suspension with bioplastic at various mass percent ratios (w/w), as shown in table 1.
sericin was added to each composition in a ratio of 0.3% (w/w) to the mass of the ha suspension.
table 1. variations in the composition of biocomposite materials (% mass ratio, w/w)
level | ha suspension | bioplastic
1     | 40            | 60
2     | 50            | 50
3     | 60            | 40
a three-dimensional image of a specimen with dimensions of 100 mm x 25 mm x 3 mm was created using the solidworks software, as shown in figure 1. the three-dimensional image is saved in the 'stl' file format and then converted into the g-code programming language using the slic3r software. the resulting g-code program is entered into the three-dimensional printing machine, and the parameter settings on the portabee machine are an infill speed of 10 mm/s, a print speed of 60 mm/s, and a layer height of 0.45 mm. the biocomposite paste material is loaded into the working-material container of the portabee three-dimensional printing machine. using the modified three-dimensional printing machine, the test specimen printouts are obtained by three-dimensional printing with the abef system. the process of printing specimens using the modified three-dimensional printer machine is shown in figure 2. figure 1. test specimen design drawing figure 2. the process of printing test specimens using a three-dimensional printing machine the dimensions of the test specimens were measured using a digital caliper with an accuracy of 0.01 mm. after the length, width, and thickness of the test specimens were measured, the measurement results were collected; each dimension value is the average of the values measured at three different measuring points. measurement of the dimensions of the specimen is illustrated in figure 3.
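the measurement averaging, volume-based shrinkage, and the taguchi smaller-is-better snr used in the analysis can be sketched as follows, assuming the nominal design volume of 100 mm × 25 mm × 3 mm; the caliper readings below are hypothetical, not values from table 2:

```python
import math

NOMINAL = (100.0, 25.0, 3.0)  # designed length, width, thickness in mm

def avg(readings):
    """average of the three measuring-point readings for one dimension."""
    return sum(readings) / len(readings)

def shrinkage_percent(length_pts, width_pts, thick_pts):
    """volume shrinkage of a printed specimen relative to the designed volume."""
    v_design = NOMINAL[0] * NOMINAL[1] * NOMINAL[2]
    v_actual = avg(length_pts) * avg(width_pts) * avg(thick_pts)
    return 100.0 * (v_design - v_actual) / v_design

def snr_smaller_is_better(values):
    """taguchi snr = -10 log10(mean(y^2)); a larger snr means a smaller error."""
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

# hypothetical caliper readings (mm) at three points per dimension
s = shrinkage_percent([98.2, 98.0, 98.4], [24.5, 24.4, 24.6], [2.90, 2.92, 2.88])
print(round(s, 2))  # 6.97
```

the snr would be computed over the shrinkage values of the replicate specimens at each factor level; the level with the largest snr gives the smallest shrinkage.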
the shrinkage of the test specimen is determined by calculating the final volume of the object from the measured length, width, and thickness. figure 3. test specimen measurement points 3 results and discussion the results of the measurement of the dimensions of the test specimens are shown in table 2; these results were analyzed using the taguchi method with the "smaller is better" characteristic. table 2. measurement results of test specimens 3.1. mean analysis of response parameters the analysis to determine the smallest value uses the mean function, because the smaller-is-better characteristic seeks the smallest discrepancy/error. figure 4 shows the smallest shrinkage at level 2. figure 4. graph of the analysis of the mean of the parameters affecting the response parameters (main effects plot for means, hydroxyapatite and bioplastic levels 40/50/60) 3.2. snr analysis of response parameters the signal to noise ratio (snr) is useful for identifying the factors that influence the response. the snr characteristic used is the smaller-is-better function; with this characteristic, the largest snr value indicates the smallest error rate. figure 5 shows the smallest shrinkage response at level 2. figure 5.
graph of snr analysis of the parameters affecting the response parameters (main effects plot for sn ratios, smaller is better; hydroxyapatite and bioplastic levels 40/50/60) 4 conclusion from the research process, it was found that the optimal composition of the biocomposite material, with the lowest shrinkage, was a ha/bp ratio of 50/50 (w/w). this study produced only the biocomposite paste composition with the smallest shrinkage; further research is needed on the machine process parameters during the specimen printing process. references [1] m. vallet-regi, ceramics for medical applications, journal of chemical society, 97-107, 2001. [2] h.e. davis and j.k. leach, hybrid and composite biomaterials in tissue engineering, multifunctional biomaterials and devices, 1-26, 2013. [3] f. gapsari and p.h. setyarini, pengaruh fraksi volume terhadap kekuatan tarik dan lentur komposit resin berpenguat serbuk kayu, jurnal rekayasa mesin, 1(2), 59-64, 2010. [4] a.e. tontowi, d.p. perkasa, a. mahulauw, and erizal, experimental study on nanobiocomposite of [nha/bioplastic] for building a porous block scaffold, conference nanocon, pune, india, 2014. [5] a.e. tontowi, d.i. shafiqy, and j. triyono, study on a layered photo composite of hydroxyapatite-bioplastic-camphorquinone composed by response surface method, international journal of applied engineering research, 10, research india publications, 2015. [6] e. pujiyanto, a.e. tontowi, m.w. wildan, and w. siswomihardjo, porous hydroxyapatite–zirconia composites prepared by powder deposition and pressureless sintering, advanced materials research, 445, 463-468, trans tech publications, switzerland, 2012. [7] m. heynick and i. stotz, 3d cad, cam and rapid prototyping, lapa digital technology seminar, 1(1), 2006. [8] a. bagsik and v. schoppner, mechanical properties of fused deposition modeling parts manufactured with ultem*9085, antec, boston, 2011. [9] m.s. mason, t. huang, r.g. landers, m.c.
leu, g.e. hilmas, and m.w. hayes, aqueous-based extrusion fabrication of ceramics on demand, proceedings of the solid freeform fabrication symposium, austin, tx, 124-134, 2007. international journal of applied sciences and smart technologies volume 1, issue 1, pages 33–44 issn 2655-8564 33 implementation of k-medoids clustering algorithm to cluster crime patterns in yogyakarta eduardus hardika sandy atmaja department of informatics, faculty of science and technology, sanata dharma university, yogyakarta, indonesia corresponding author: edo@usd.ac.id (received 02-05-2019; revised 21-05-2019; accepted 21-05-2019) abstract the increase in crime from day to day needs to be a concern for the police, as the party responsible for security in the community. crime prevention efforts must be carried out seriously, with all the knowledge the police have. to increase police performance in crime prevention, it is necessary to analyze crime data so that relevant information can be obtained. this study analyzed crime data to obtain relevant information using clustering in data mining. clustering is a data mining method that can be used to extract valuable information by grouping data into groups that have similar characteristics. the data used in this study were crime patterns, which were then grouped using the k-medoids clustering algorithm. the results obtained in this study were three crime groups, namely a high crime level with 4 members, a medium crime level with 6 members, and a low crime level with 8 members. it is expected that this information can be used as material for consideration in crime prevention efforts. keywords: clustering, k-medoids, criminality 1 introduction crime is any act that is prohibited by public law to protect the public, and is given punishment by the state.
these acts are punished because they violate social norms, such as legal norms, social norms, and religious norms that apply in society [1]. the existence of punishment applied by law enforcement does not make criminals abandon their intentions; in fact, crime in yogyakarta is increasingly widespread. the increase of criminal cases in society can result in both material and immaterial losses. for this reason, efforts are needed from law enforcement to reduce crime in society. such efforts can be made by finding relevant information related to crime, which can be obtained by processing and analyzing the crime data owned by the police. the crime data owned by the yogyakarta police is still stored in manual forms such as register books and excel; the data is only stored and is not used to produce any information, although it could be processed and analyzed to produce valuable information for crime prevention efforts. data mining is a proper technique to extract important information from a data set. crime data owned by the police can be processed using data mining into crime patterns that represent relationships between crimes. this was successfully done by atmaja [2]; the result was crime patterns presented in graph form. the weakness of that study is that there is no clear grouping of crime levels formed from the generated crime patterns. this study tried to refine the previous research by grouping crime patterns into three categories, namely high crime level, medium crime level, and low crime level. clustering is one of the data mining techniques that aims to group data based on information found in the data [3]. the grouping is based on the similarity between data, so the data in the same cluster is homogeneous. thus clustering is a very appropriate method for classifying crime patterns into high, medium, and low crime levels.
Research on the implementation of clustering methods has been done before, for example by Singh et al. [4]. They implemented the k-means clustering algorithm using three different distance measurements, namely Euclidean, Manhattan, and Chebyshev. The result is that the implementation of the k-means algorithm using the Euclidean distance measurement produces the best groups among the three distance measurements, so it can be concluded that the best pairing for the k-means algorithm is the Euclidean distance measurement. Research on the use of Euclidean distance in the k-means algorithm has been successfully done by Atmaja [5]. The aim of his study was to cluster crime data into three categories, namely high, medium, and low crime level. Although the objective of that research was achieved, the k-means algorithm is classified as ineffective because it is sensitive to noise and outliers, a consequence of using averages as cluster centers [6]. This study tried to improve on the previous study by replacing the k-means algorithm with the k-medoids algorithm. The k-medoids algorithm is a clustering algorithm that is not influenced by outliers or other extreme values [6]. K-medoids works by choosing actual data points as cluster centers, without performing an average calculation as in k-means. The k-medoids algorithm is given in Figure 1 [6].

Figure 1. K-medoids algorithm

The result of this study is crime patterns divided into three groups, namely high, medium, and low crime level. It is expected that the police can use this information to improve crime prevention efforts in society.

2 Research Methodology

The research methodology consists of the activity steps to implement the k-medoids algorithm to cluster crime patterns from Yogyakarta police data, which are presented in Figure 2.
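The algorithm listing in Figure 1 does not survive extraction here, but the swap-based procedure the paper follows in Section 3 — pick k random medoids, assign each point to its nearest medoid by Euclidean distance, try replacing a medoid with a random non-medoid, and keep the swap only while the total cost decreases — can be sketched as below. This is a minimal illustration over (support, confidence) pairs, not the paper's actual implementation:

```python
import math
import random

def euclidean(a, b):
    # distance between two (support, confidence) pairs
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def total_cost(data, medoids):
    # sum of each point's distance to its nearest medoid
    return sum(min(euclidean(p, m) for m in medoids) for p in data)

def k_medoids(data, k, rng, max_iter=100):
    medoids = rng.sample(data, k)
    cost = total_cost(data, medoids)
    for _ in range(max_iter):
        # try replacing one medoid with a random non-medoid point
        candidates = [p for p in data if p not in medoids]
        if not candidates:
            break
        i = rng.randrange(k)
        trial = medoids[:]
        trial[i] = rng.choice(candidates)
        trial_cost = total_cost(data, trial)
        if trial_cost < cost:       # keep the swap only if the cost drops
            medoids, cost = trial, trial_cost
        else:
            break                   # paper's rule: stop when cost no longer improves
    # assign every point to its nearest medoid
    clusters = {m: [] for m in medoids}
    for p in data:
        nearest = min(medoids, key=lambda m: euclidean(p, m))
        clusters[nearest].append(p)
    return medoids, clusters, cost
```

The stopping rule mirrors the paper's worked example: the first candidate swap that fails to lower the total cost ends the search.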
Figure 2 shows the research methodology, which began with a literature study of theories relevant to the problem. The next step was collecting data related to the research; in this case, the processed data was crime data from the Yogyakarta police. The collected crime data was then processed using association techniques in data mining to produce association rules that describe crime patterns. The generated rules were used as input to the k-medoids algorithm to produce crime patterns grouped by low, medium, and high crime level. The next step was analyzing the results to find out whether the objective was achieved or not. Finally, conclusions were drawn from the research, and suggestions were given to correct existing disadvantages in future research.

Figure 2. Research methodology (literature study → data collecting → association rules generation → clustering with k-medoids → result analysis → concluding and giving advice)

3 Results and Discussions

3.1 Crime patterns

There are 18 samples of crime patterns resulting from association technique processing, each accompanied by support and confidence. The data will be grouped using the k-medoids algorithm based on the support and confidence variables. These data are presented in Table 1.

Table 1. Crime patterns
No. | Rule | Support | Confidence
1 | If embezzlement then theft | 0.02 | 0.03
2 | If theft then embezzlement | 0.02 | 0.29
3 | If embezzlement then deception | 0.54 | 0.81
4 | If deception then embezzlement | 0.54 | 0.82
5 | If embezzlement then document forgery | 0.02 | 0.03
… | … | … | …
18 | If unpleasant act then defamation | 0.02 | 0.38

3.2 Determining initial medoids

In the first stage, three medoids were randomly selected from the data sample in Table 1, as shown in Table 2.

Table 2. Three initial medoids

Medoid | c1 | c2 | c3
Support | 0.54 | 0.08 | 0.03
Confidence | 0.81 | 0.12 | 0.30

3.3 Calculating Euclidean distance, iteration 1

The next step is the Euclidean distance calculation from each data point to the three selected medoids. The Euclidean distance is calculated with the following formula [6]:

d(i, j) = √((x_i1 − x_j1)² + (x_i2 − x_j2)²)

Here, d(i, j) is the distance between a data point and a medoid, x_i1 is the support value of the data point, x_j1 is the medoid (c) value for support, x_i2 is the confidence value of the data point, and x_j2 is the medoid (c) value for confidence. Table 3 presents the results of the Euclidean distance calculation for each data point, along with the medoid that has the shortest distance to that point.
Table 3. Rules with Euclidean distance, iteration 1

Rule | Support | Confidence | d(c1) | d(c2) | d(c3) | Shortest distance
1 | 0.02 | 0.03 | 0.937 | 0.108 | 0.270 | 0.108
2 | 0.02 | 0.29 | 0.735 | 0.180 | 0.014 | 0.014
3 | 0.54 | 0.81 | 0.000 | 0.829 | 0.721 | 0.000
4 | 0.54 | 0.82 | 0.010 | 0.838 | 0.728 | 0.010
5 | 0.02 | 0.03 | 0.937 | 0.108 | 0.270 | 0.108
6 | 0.02 | 0.41 | 0.656 | 0.296 | 0.110 | 0.110
7 | 0.09 | 0.13 | 0.815 | 0.014 | 0.180 | 0.014
8 | 0.09 | 0.97 | 0.478 | 0.850 | 0.673 | 0.478
9 | 0.02 | 0.03 | 0.937 | 0.108 | 0.270 | 0.108
10 | 0.02 | 0.41 | 0.656 | 0.296 | 0.110 | 0.110
11 | 0.08 | 0.12 | 0.829 | 0.000 | 0.187 | 0.000
12 | 0.08 | 0.94 | 0.478 | 0.820 | 0.642 | 0.478
13 | 0.03 | 0.30 | 0.721 | 0.187 | 0.000 | 0.000
14 | 0.03 | 0.86 | 0.512 | 0.742 | 0.560 | 0.512
15 | 0.02 | 0.23 | 0.779 | 0.125 | 0.071 | 0.071
16 | 0.02 | 0.69 | 0.534 | 0.573 | 0.390 | 0.390
17 | 0.02 | 0.44 | 0.638 | 0.326 | 0.140 | 0.140
18 | 0.02 | 0.38 | 0.675 | 0.267 | 0.081 | 0.081

From Table 3, it can be seen that medoid c1 has 5 member rules {3, 4, 8, 12, 14}, medoid c2 has 5 member rules {1, 5, 7, 9, 11}, and medoid c3 has 8 member rules {2, 6, 10, 13, 15, 16, 17, 18}.

3.4 Calculating total cost, iteration 1

Calculating the total cost is the final step of iteration 1: summing the shortest distances in Table 3 gives a total cost of 2.734.

3.5 Determining random medoids, iteration 2

The process continues to iteration 2 by selecting a new random medoid from the data to temporarily replace medoid c3. The newly selected medoid must not be the same as any of the medoids already selected. Table 4 shows the three medoids for iteration 2.

Table 4. Three medoids, iteration 2

Medoid | c1 | c2 | c random
Support | 0.54 | 0.08 | 0.03
Confidence | 0.81 | 0.12 | 0.86

3.6 Calculating Euclidean distance, iteration 2

After the new medoid has been determined, the next step is to recalculate the Euclidean distance from each data point to the three medoids in Table 4. The results are shown in Table 5.
Table 5. Rules with Euclidean distance, iteration 2

Rule | Support | Confidence | d(c1) | d(c2) | d(c3) | Shortest distance
1 | 0.02 | 0.03 | 0.937 | 0.108 | 0.830 | 0.108
2 | 0.02 | 0.29 | 0.735 | 0.180 | 0.570 | 0.180
3 | 0.54 | 0.81 | 0.000 | 0.829 | 0.512 | 0.000
4 | 0.54 | 0.82 | 0.010 | 0.838 | 0.512 | 0.010
5 | 0.02 | 0.03 | 0.937 | 0.108 | 0.830 | 0.108
6 | 0.02 | 0.41 | 0.656 | 0.296 | 0.450 | 0.296
7 | 0.09 | 0.13 | 0.815 | 0.014 | 0.732 | 0.014
8 | 0.09 | 0.97 | 0.478 | 0.850 | 0.125 | 0.125
9 | 0.02 | 0.03 | 0.937 | 0.108 | 0.830 | 0.108
10 | 0.02 | 0.41 | 0.656 | 0.296 | 0.450 | 0.296
11 | 0.08 | 0.12 | 0.829 | 0.000 | 0.742 | 0.000
12 | 0.08 | 0.94 | 0.478 | 0.820 | 0.094 | 0.094
13 | 0.03 | 0.30 | 0.721 | 0.187 | 0.560 | 0.187
14 | 0.03 | 0.86 | 0.512 | 0.742 | 0.000 | 0.000
15 | 0.02 | 0.23 | 0.779 | 0.125 | 0.630 | 0.125
16 | 0.02 | 0.69 | 0.534 | 0.573 | 0.170 | 0.170
17 | 0.02 | 0.44 | 0.638 | 0.326 | 0.420 | 0.326
18 | 0.02 | 0.38 | 0.675 | 0.267 | 0.480 | 0.267

From Table 5, it can be seen that medoid c1 has 2 member rules {3, 4}, medoid c2 has 12 member rules {1, 2, 5, 6, 7, 9, 10, 11, 13, 15, 17, 18}, and medoid c3 has 4 member rules {8, 12, 14, 16}.

3.7 Calculating total cost, iteration 2

Calculating the total cost is the final step of iteration 2: summing the shortest distances in Table 5 gives a total cost of 2.416. To decide on the next iteration, the total cost of iteration 2 is compared with that of iteration 1: 2.416 < 2.734. Because the total cost of iteration 2 is smaller than that of iteration 1, the swap is accepted, c random replaces medoid c3, and the process continues to iteration 3.

3.8 Determining random medoids, iteration 3

The process continues to iteration 3 by selecting a new random medoid from the data to temporarily replace medoid c3 (the c random from iteration 2). The newly selected medoid must not be the same as any of the medoids already selected. Table 6 shows the three medoids for iteration 3.
Table 6. Three medoids, iteration 3

Medoid | c1 | c2 | c random
Support | 0.54 | 0.08 | 0.02
Confidence | 0.81 | 0.12 | 0.44

3.9 Calculating Euclidean distance, iteration 3

After the new medoid has been determined, the next step is to recalculate the Euclidean distance from each data point to the three medoids in Table 6. The results are shown in Table 7.

Table 7. Rules with Euclidean distance, iteration 3

Rule | Support | Confidence | d(c1) | d(c2) | d(c3) | Shortest distance
1 | 0.02 | 0.03 | 0.937 | 0.108 | 0.410 | 0.108
2 | 0.02 | 0.29 | 0.735 | 0.180 | 0.150 | 0.150
3 | 0.54 | 0.81 | 0.000 | 0.829 | 0.638 | 0.000
4 | 0.54 | 0.82 | 0.010 | 0.838 | 0.644 | 0.010
5 | 0.02 | 0.03 | 0.937 | 0.108 | 0.410 | 0.108
6 | 0.02 | 0.41 | 0.656 | 0.296 | 0.030 | 0.030
7 | 0.09 | 0.13 | 0.815 | 0.014 | 0.318 | 0.014
8 | 0.09 | 0.97 | 0.478 | 0.850 | 0.535 | 0.478
9 | 0.02 | 0.03 | 0.937 | 0.108 | 0.410 | 0.108
10 | 0.02 | 0.41 | 0.656 | 0.296 | 0.030 | 0.030
11 | 0.08 | 0.12 | 0.829 | 0.000 | 0.326 | 0.000
12 | 0.08 | 0.94 | 0.478 | 0.820 | 0.504 | 0.478
13 | 0.03 | 0.30 | 0.721 | 0.187 | 0.140 | 0.140
14 | 0.03 | 0.86 | 0.512 | 0.742 | 0.420 | 0.420
15 | 0.02 | 0.23 | 0.779 | 0.125 | 0.210 | 0.125
16 | 0.02 | 0.69 | 0.534 | 0.573 | 0.250 | 0.250
17 | 0.02 | 0.44 | 0.638 | 0.326 | 0.000 | 0.000
18 | 0.02 | 0.38 | 0.675 | 0.267 | 0.060 | 0.060

From Table 7, it can be seen that medoid c1 has 4 member rules {3, 4, 8, 12}, medoid c2 has 6 member rules {1, 5, 7, 9, 11, 15}, and medoid c3 has 8 member rules {2, 6, 10, 13, 14, 16, 17, 18}.

3.10 Calculating total cost, iteration 3

Calculating the total cost is the final step of iteration 3: summing the shortest distances in Table 7 gives a total cost of 2.510. To decide on the next iteration, the total cost of iteration 3 is compared with that of iteration 2: 2.510 > 2.416. Because the total cost of iteration 3 is greater than that of iteration 2, the iteration stops.

3.11 Results

Each medoid represents one group of crime level based on support and confidence: c1 represents high crime level, c2 represents medium crime level, and c3 represents low crime level.
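The distances in the tables above are straightforward to verify. The short check below recomputes the first row of Table 3 from the iteration-1 medoids in Table 2, using the same Euclidean formula as the paper (a sanity check only, not part of the original study):

```python
import math

def dist(p, m):
    # Euclidean distance between a (support, confidence) point and a medoid
    return math.sqrt((p[0] - m[0]) ** 2 + (p[1] - m[1]) ** 2)

# iteration-1 medoids c1, c2, c3 from Table 2
medoids = [(0.54, 0.81), (0.08, 0.12), (0.03, 0.30)]
rule1 = (0.02, 0.03)  # rule 1 from Table 1

distances = [round(dist(rule1, m), 3) for m in medoids]
shortest = min(distances)
```

This reproduces Table 3, row 1: distances 0.937, 0.108, and 0.270, with the shortest distance 0.108 placing rule 1 in medoid c2's cluster.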
The results of the crime pattern grouping are shown in Tables 8, 9, and 10.

Table 8. High-level crime patterns

No. | Rule | Support | Confidence
1 | If embezzlement then deception | 0.54 | 0.81
2 | If deception then embezzlement | 0.54 | 0.82
3 | If fiduciary then embezzlement | 0.09 | 0.97
4 | If information violation and electronic transaction then deception | 0.08 | 0.94

Table 9. Medium-level crime patterns

No. | Rule | Support | Confidence
1 | If embezzlement then theft | 0.02 | 0.03
2 | If embezzlement then document forgery | 0.02 | 0.03
3 | If embezzlement then fiduciary | 0.09 | 0.13
4 | If deception then document forgery | 0.02 | 0.03
5 | If deception then information violation and electronic transaction | 0.08 | 0.12
6 | If persecution then beating | 0.02 | 0.23

Table 10. Low-level crime patterns

No. | Rule | Support | Confidence
1 | If theft then embezzlement | 0.02 | 0.29
2 | If document forgery then embezzlement | 0.02 | 0.41
3 | If document forgery then deception | 0.02 | 0.41
4 | If persecution then domestic violence | 0.03 | 0.30
5 | If domestic violence then persecution | 0.03 | 0.86
6 | If beating then persecution | 0.02 | 0.69

Tables 8, 9, and 10 show that some crime patterns are classified as high level and others as low level. Information about high-level crime can be used by the police to prevent potential crime in society.

4 Conclusions

It can be concluded that the k-medoids algorithm can be used to cluster crime patterns into three crime levels: 4 rules classified as high-level crime, 6 rules classified as medium-level crime, and 8 rules classified as low-level crime. Suggestions based on the results of this study are:
a) There is a need to compare several distance methods for the k-medoids algorithm, so that the most appropriate distance calculation method for k-medoids can be determined.
b) There is a need to apply a weighting mechanism for each variable, because not all variables have the same importance and priority.

Acknowledgements

The author thanks Polisi Resor Kota Yogyakarta (Polresta Yogyakarta), which provided the crime data for this research while hiding some sensitive variables regarding victims and criminals.

References

[1] J. E. Sahetapy and B. M. Reksodiputro, "Paradoks dalam Kriminologi," Rajawali, Jakarta, 1982.
[2] E. H. S. Atmaja, "Visualisasi Aturan Asosiasi Berbasis Graph untuk Data Tindak Kejahatan," Media Teknika, 12 (1), 46−57, 2017.
[3] P. Tan, M. Steinbach, and V. Kumar, "Introduction to Data Mining," Addison-Wesley, Boston, 2006.
[4] A. Singh, A. Yadav, and A. Rana, "K-means with Three Different Distance Metrics," International Journal of Computer Applications, 67 (10), 13−17, 2013.
[5] E. H. S. Atmaja, "Pengelompokan Tingkat Kriminalitas di Kota Yogyakarta dengan Menggunakan Metode K-Means Clustering," Seminar Nasional Riset dan Teknologi Terapan, August 2018.
[6] J. Han, "Data Mining: Concepts and Techniques," second edition, Morgan Kaufmann, San Francisco, 2006.

International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 1, pages 101–110, p-ISSN 2655-8564, e-ISSN 2685-9432

Identity Graph of Finite Cyclic Groups

Maria Vianney Any Herawati*, Priscila Septinina Henryanti, Ricky Aditya
Department of Mathematics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: any@usd.ac.id
(Received 26-03-2021; revised 24-04-2021; accepted 05-05-2021)

Abstract

This paper discusses how to express a finite group as a graph, specifically the identity graph of a cyclic group.
The term chosen for the graph is identity graph, because it is the identity element of the group that holds the key to forming the graph. Through the identity graph, it can be seen which elements are inverses of themselves, along with other properties of the group. We will look for the characteristics of the identity graph of a finite cyclic group, for both odd and even order.

Keywords: graph, identity graph, group, identity element

1 Introduction

Mathematics as a science has several branches, including abstract algebra and graph theory [1-3]. The phrase abstract algebra has been used since the early 20th century to distinguish the field from what is now more commonly called elementary algebra, the study of the rules for manipulating algebraic formulas and expressions involving real or complex variables and numbers [4-6]. Abstract algebra is a field of mathematics that studies algebraic structures, such as monoids, groups, rings, fields, modules, etc. [3, 4]. Students often find it difficult to learn an abstract structure such as a group. Therefore, some writers have looked for ways to represent a group with a diagram called a graph. Graph theory is a branch of mathematics that has been studied and developed by many researchers. In its development, applications of graph theory are often found both within mathematics itself and in other fields such as computer science, biology, and chemistry, as well as in problems of human life such as transportation, installation of public facilities, and traffic light management. In this paper, graph theory is applied in abstract algebra, specifically to represent groups in the form of a graph, so that a group can be visualized diagrammatically and its properties studied through its graph. The groups discussed here are finite groups.
There are several previous articles that examine graphs formed from groups, including Cayley graphs, G-graphs, coprime graphs, and identity graphs of dihedral groups.

2 Methodology: Notations and Definitions

The method used is a literature study, with the initial step of forming the identity graphs of several cyclic groups, then looking for general patterns in their properties, making conjectures, and proving them. Before going into those steps, in this section we discuss some basic concepts and definitions in group theory and graph theory.

Group theory

These are some definitions in group theory [4] which will be used in the next section:
1. A group is a set with one binary operation on the set which is associative, has an identity element, and in which each element has an inverse.
2. The order of a group is the number of its elements. A finite group is a group of finite order. Let e be the identity element of a finite group G. The order of an element a in G is defined as the smallest natural number n such that aⁿ = e.
3. Let G be a group. A non-empty subset H ⊆ G is called a subgroup of G if and only if H is also a group under the same operation defined on G [4].
4. If G is a group and a ∈ G, then the set ⟨a⟩ = {aⁿ : n ∈ Z} is a subgroup of G, and ⟨a⟩ is called the cyclic subgroup generated by a. A group G is called a cyclic group if and only if there exists a ∈ G such that G = ⟨a⟩ [4].

Related to orders of groups and orders of elements, we have these two important theorems in group theory [4]:
1. (Cauchy's theorem) Let G be a finite group and p a prime number. If p divides the order of G, then G has an element of order p.
2. (Lagrange's theorem) If H is a subgroup of a finite group G, then the order of H divides the order of G.
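Cauchy's and Lagrange's theorems above are easy to check computationally for a small example. The sketch below computes element orders in the additive cyclic group Z_12 (additive notation is used for convenience; it is isomorphic to the multiplicative notation of this paper, and the choice of n = 12 is illustrative only):

```python
def element_order(a, n):
    # order of a in the additive cyclic group Z_n:
    # the smallest k >= 1 with k*a ≡ 0 (mod n)
    k, x = 1, a % n
    while x != 0:
        x = (x + a) % n
        k += 1
    return k

# orders of the non-identity elements of Z_12
orders = {a: element_order(a, 12) for a in range(1, 12)}
```

Every order divides 12 (Lagrange), and since the primes 2 and 3 divide 12, elements of order 2 and 3 exist (Cauchy): here 6 has order 2 and 4 has order 3.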
Graph theory

A graph G is a pair of finite sets (V, E), written G(V, E), where V is a non-empty set of vertices and E is a set of edges, each connecting a pair of vertices or connecting a vertex with itself. A graph G can be represented by a diagram: each vertex of G is represented by a dot or small circle, while an edge connecting two vertices is represented by a curve connecting the corresponding vertices in the diagram.

3 Results and Discussion

In this section, we present our research results as theorems with their proofs. Some illustrations of graphs are also presented. First, we need to understand the concept of the identity graph of a group.

Definition [2]. Let G be a group. The identity graph of the group G is a graph with the elements of G as its vertices which satisfies these properties:
a) two elements x, y in G are connected by an edge if xy = e, where e is the identity element of G;
b) each element of G is connected by an edge with the identity element e.

To develop the previous research, we shall examine the identity graphs of finite cyclic groups. There are two possibilities for the order of a finite cyclic group: it is an odd natural number, or it is an even natural number. The order may also be a prime number. We will examine the case of odd prime order first.

Theorem 1 [2]. If G = ⟨g | g^p = 1, p ≠ 2⟩ is a cyclic group of order p, where p is prime, then the identity graph formed by G consists of (p − 1)/2 triangles.

Proof: Let G = ⟨g | g^p = 1, p ≠ 2⟩ be a cyclic group of order p, where p is prime. Then G has no proper nontrivial subgroup, by Lagrange's theorem. Therefore, no non-identity element of G is its own inverse; in other words, there is no non-identity gⁱ ∈ G such that (gⁱ)² = 1. Suppose there were such a gⁱ in G with (gⁱ)² = 1.
Then G would have the subgroup H = {1, gⁱ}, contradicting the fact that G has no proper nontrivial subgroup. As a result, G has no element of order 2; this also follows from Cauchy's theorem, since p ≠ 2. For every gⁱ in G there is exactly one inverse of gⁱ, namely the gʲ such that gⁱgʲ = 1, with i and j positive integers and i ≠ j. Because gⁱgʲ = g^(i+j) = 1 = g^p, we get p = i + j, which is equivalent to j = p − i. Consequently, we can form the identity graph of G as illustrated by Figure 1.

Figure 1. Identity graph of G = ⟨g | g^p = 1, p ≠ 2⟩ (each pair gⁱ, g^(p−i) forms a triangle with the vertex 1; the pairs run from {g, g^(p−1)} up to {g^((p−1)/2), g^((p+1)/2)})

We have seen a unique pattern in the identity graph of a finite cyclic group of odd prime order. We skip the case of even prime order, since the only even prime number is 2, and the identity graph of a group of order 2 is trivial and not so interesting. Now we look at the more general case of finite cyclic groups of odd order.

Theorem 2 [2]. If G is a cyclic group of odd order, then G has an identity graph G_i which is formed by triangles, with no single edge.

Proof: Let G = ⟨g | gⁿ = 1⟩, with n an odd integer, be a cyclic group of order n under multiplication. The elements of G are {1, g, g², g³, g⁴, …, g^(n−1)}. We use the Cayley table given in Table 1 to show the result of the operation for each pair of elements of G.

Table 1. Cayley table for Theorem 2

· | 1 | g | g² | g³ | ⋯ | g^(n−3) | g^(n−2) | g^(n−1)
1 | 1 | g | g² | g³ | ⋯ | g^(n−3) | g^(n−2) | g^(n−1)
g | g | g² | g³ | g⁴ | ⋯ | g^(n−2) | g^(n−1) | 1
g² | g² | g³ | g⁴ | g⁵ | ⋯ | g^(n−1) | 1 | g
g³ | g³ | g⁴ | g⁵ | g⁶ | ⋯ | 1 | g | g²
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋮ | ⋮
g^(n−3) | g^(n−3) | g^(n−2) | g^(n−1) | 1 | ⋯ | g^(n−6) | g^(n−5) | g^(n−4)
g^(n−2) | g^(n−2) | g^(n−1) | 1 | g | ⋯ | g^(n−5) | g^(n−4) | g^(n−3)
g^(n−1) | g^(n−1) | 1 | g | g² | ⋯ | g^(n−4) | g^(n−3) | g^(n−2)

From the Cayley table above (Table 1) we can see that g·g^(n−1) = 1, g²·g^(n−2) = 1, …, g^(n−1)·g = 1.
Thus, for any non-identity element g^k, we have g^k·g^(n−k) = 1, for k ∈ Z⁺ and k < n. From part (a) of the definition of the identity graph, for any elements a, b ∈ G with a ≠ b, a ≠ e, b ≠ e, there is an edge connecting a to b if and only if ab = ba = e. Therefore, there is an edge connecting g^k to g^(n−k), and since by part (b) every a ∈ G with a ≠ e is connected by an edge to e, there is an edge from g^k to 1. Together these form a triangle connecting 1, g^k, and g^(n−k). Moreover, since n is odd, n = 2x + 1 for some x ∈ Z⁺, so the number of non-identity elements in G is even. Those non-identity elements can therefore be partitioned into two sets of the same cardinality, where the inverse of each element in one set lies in the other set and vice versa. Therefore, there is no single edge in the identity graph of G. The identity graph corresponding to G is shown in Figure 2.

Figure 2. Identity graph of the group G = ⟨g | gⁿ = 1⟩, n an odd number (triangles 1–g^k–g^(n−k) for k = 1, …, (n−1)/2)

Based on the previous theorems, we can state a characterization of the identity graph of a cyclic group of odd order in the following Theorem 3.

Theorem 3 [2]. If G = ⟨g | gⁿ = 1⟩ is a cyclic group of order n, where n is an odd number, then the identity graph G_i of G is formed by (n − 1)/2 triangles.

Proof: This can be proved using Theorem 2 and Figure 2. Since the number of non-identity elements is even and those elements form (n − 1)/2 pairs, where the elements of each pair are inverse to each other, there are (n − 1)/2 triangles.
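The pairing argument in the proofs above can be checked mechanically. The sketch below counts, for the cyclic group Z_n written additively, the inverse pairs (each of which yields a triangle with the identity) and the self-inverse non-identity elements (each of which yields a single edge); it is an illustration added here, not part of the original paper:

```python
def identity_graph_shape(n):
    # inverse pairs {a, n-a} with a != n-a: each forms a triangle with 0
    triangles = len([a for a in range(1, n) if a < n - a])
    # self-inverse non-identity elements (2a ≡ 0 mod n): each gives a single edge
    single_edges = len([a for a in range(1, n) if (2 * a) % n == 0])
    return triangles, single_edges
```

For every odd n this returns ((n − 1)/2, 0), matching Theorems 2 and 3; for even n it returns ((n − 2)/2, 1), anticipating the even-order case treated next.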
Similar to the result in the odd order case, we can obtain a characterization of the identity graph of a cyclic group of even order in the following Theorem 4. The proof uses the same principle as the odd order case.

Theorem 4 [2]. If G = ⟨g | g^m = 1⟩ is a cyclic group of order m, where m is an even number, then its identity graph G_i has (m − 2)/2 triangles and a single edge.

Proof: Let G = ⟨g | g^m = 1⟩, with m an even number, be a cyclic multiplicative group of order m. The elements of G are {1, g, g², g³, g⁴, …, g^(m−1)}. We use the Cayley table given in Table 2 to show the operation between elements of G (all exponents taken modulo m).

Table 2. Cayley table for Theorem 4

· | 1 | g | g² | g³ | ⋯ | g^(m/2) | ⋯ | g^(m−3) | g^(m−2) | g^(m−1)
1 | 1 | g | g² | g³ | ⋯ | g^(m/2) | ⋯ | g^(m−3) | g^(m−2) | g^(m−1)
g | g | g² | g³ | g⁴ | ⋯ | g^(m/2+1) | ⋯ | g^(m−2) | g^(m−1) | 1
g² | g² | g³ | g⁴ | g⁵ | ⋯ | g^(m/2+2) | ⋯ | g^(m−1) | 1 | g
g³ | g³ | g⁴ | g⁵ | g⁶ | ⋯ | g^(m/2+3) | ⋯ | 1 | g | g²
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋱ | ⋮ | ⋮ | ⋮
g^(m/2) | g^(m/2) | g^(m/2+1) | g^(m/2+2) | g^(m/2+3) | ⋯ | 1 | ⋯ | g^(m/2−3) | g^(m/2−2) | g^(m/2−1)
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋱ | ⋮ | ⋮ | ⋮
g^(m−3) | g^(m−3) | g^(m−2) | g^(m−1) | 1 | ⋯ | g^(m/2−3) | ⋯ | g^(m−6) | g^(m−5) | g^(m−4)
g^(m−2) | g^(m−2) | g^(m−1) | 1 | g | ⋯ | g^(m/2−2) | ⋯ | g^(m−5) | g^(m−4) | g^(m−3)
g^(m−1) | g^(m−1) | 1 | g | g² | ⋯ | g^(m/2−1) | ⋯ | g^(m−4) | g^(m−3) | g^(m−2)

From the Cayley table above (Table 2) we can see that g·g^(m−1) = 1, g²·g^(m−2) = 1, …, g^(m−1)·g = 1. However, for g^(m/2) we have something different, namely g^(m/2)·g^(m/2) = 1. Thus, for any non-identity element other than g^(m/2) we have g^k·g^(m−k) = 1, for k ∈ Z⁺ and k < m. From part (a) of the definition of the identity graph, for any elements a, b ∈ G with a ≠ b, a ≠ e, b ≠ e, the vertices a and b are adjacent if and only if ab = ba = e. This means the vertex g^k is adjacent to the vertex g^(m−k), and since by part (b) every vertex a ∈ G with a ≠ e is adjacent to e, there is an edge from g^k to 1. These form a triangle connecting 1, g^k, and g^(m−k).
On the other hand, g^(m/2) has only one edge connecting it, namely the edge which connects g^(m/2) with 1. Moreover, since m is even, m = 2x for some x ∈ Z⁺, and the number of elements of G which are neither the identity nor g^(m/2) is even. Therefore, there are (m − 2)/2 triangles and a single edge. The identity graph G_i corresponding to G is given in Figure 3.

Figure 3. Identity graph of the group G = ⟨g | g^m = 1⟩, m an even number (triangles 1–g^k–g^(m−k) plus the single edge from g^(m/2) to 1)

We see that in the even order case, there is exactly one single edge that does not form a triangle. This happens because a finite cyclic group of even order has exactly one element of order 2.

4 Conclusion

From what we have discussed, we draw the following conclusions about identity graphs of finite cyclic groups:
a. The identity graph of a group is a way to represent the relations between the elements of a group in a graph. In the identity graph of a group, the identity element is connected with every other element, and each non-identity element is connected with its inverse.
b. For a cyclic group of odd order n, the identity graph consists of (n − 1)/2 triangles. There is no single edge in this case, since such a group has no element of order 2.
c. For a cyclic group of even order m, the identity graph consists of (m − 2)/2 triangles and a single edge. The single edge connects the identity element to the only element of the group that has order 2.

References

[1] A. Bretto, A. Faisant, and L. Gilibert, "G-graphs: a new representation of groups," Journal of Symbolic Computation, 42, 549−560, 2007.
[2] W. B. V. Kandasamy and F. Smarandache, Groups as Graphs. Slatina: Editura CuArt, 2009.
[3] S. Lovett, Abstract Algebra. Boca Raton: CRC Press, 2016.
[4] C. C. Miller, Essentials of Modern Algebra.
Dulles, VA: Mercury Learning and Information, 2013.
[5] R. Rajkumar and P. Devi, "Coprime graph of subgroups of a group," https://www.semanticscholar.org
[6] M. U. Sherman-Bennett, On Groups and Their Graphs. MA: Bard College, 2016.

International Journal of Applied Sciences and Smart Technologies, Volume 2, Issue 2, pages 179–196, p-ISSN 2655-8564, e-ISSN 2685-9432

Backpropagation Neural Network for Book Classification Using the Image Cover

I Putu Budhi Darma Purwanta*, Cyprianus Kuntoro Adi, Ni Putu Novita Puspa Dewi
Department of Informatics, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: budhidarmap@gmail.com
(Received 14-06-2020; revised 26-08-2020; accepted 26-08-2020)

Abstract

Artificial neural networks are known to provide a good model for classification. The goal of this research is to classify books in Bahasa Indonesia using their covers. The data is in the form of scanned images, each 300 cm high and 130 cm wide at 96 dpi image resolution. The research performed feature extraction using an image processing method, MSER (Maximally Stable Extremal Regions), to identify the area of the book title, and Tesseract optical character recognition (OCR) to detect the title. Next, the features extracted by MSER and OCR were converted into a numerical matrix as the input to the backpropagation artificial neural network. The accuracy obtained using one hidden layer with 15 neurons was 63.31%, while the evaluation using 2 hidden layers with a combination of 15 and 35 neurons resulted in an accuracy of 79.89%.
The ability of the model to classify books was affected by the image quality, variation, and amount of training data.

Keywords: classification, image processing, MSER, Tesseract, text processing, backpropagation artificial neural network

1 Introduction

Book classification is a very common research topic. There are, however, not many papers that classify books using the book cover to represent the book's content. The variety of backgrounds, fonts, and locations of book titles, as well as similar titles representing different content, makes such a classification approach difficult [1]. Iwana et al. (2016) conducted classification using book covers, employing features such as font types, color characteristics, color contrast, image characteristics, and writing characteristics [1]. This study, meanwhile, focuses on the book title as a means to classify book content using an artificial neural network.

Backpropagation is widely used to train feed-forward neural networks for supervised learning. The backpropagation method computes the gradient of the loss (error) function with respect to each weight by the chain rule. It calculates the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculation of the intermediate terms in the chain rule, similar to what dynamic programming does [2]. Various papers employ the backpropagation approach. Karlissa et al. (2018), for example, developed a backpropagation-based autonomous control system for a three-wheeled robot [3]. Maneesukasem and Pintavirooj (2012) used feed-forward backpropagation to segment urine sediment images to identify crystals, casts, red blood cells, white blood cells, and yeast in urine sediment [4]. Backpropagation has also been applied to information retrieval data to study error tolerance in human information processing.
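The layer-by-layer gradient computation just described can be shown in a few lines. The sketch below trains a tiny 2-2-1 sigmoid network on XOR with plain stochastic gradient descent; it is a generic textbook illustration of backpropagation, not the network architecture used in this paper (which is described in Section 2):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(epochs=2000, lr=0.5, seed=1):
    rng = random.Random(seed)
    # 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output
    W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [rng.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, t in data:
            # forward pass
            h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
            y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            total += 0.5 * (y - t) ** 2
            # backward pass: apply the chain rule one layer at a time
            dy = (y - t) * y * (1 - y)                               # output layer
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden layer
            # gradient-descent weight updates
            for j in range(2):
                w2[j] -= lr * dy * h[j]
                W1[j][0] -= lr * dh[j] * x[0]
                W1[j][1] -= lr * dh[j] * x[1]
                b1[j] -= lr * dh[j]
            b2 -= lr * dy
        losses.append(total)
    return losses

losses = train_xor()
```

Because every update moves the weights along the negative gradient, the per-epoch loss should fall over training.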
Backpropagation showed its ability to handle heterogeneous data, such as features based on different words [5]. Mandl (2000), meanwhile, analyzed model-based human-computer interaction that integrates human knowledge into the retrieval process in Cognitive Similarity Learning in Information Retrieval (COSIMIR). It applies backpropagation to information retrieval and integrates human-centered, tolerant computing into the retrieval process [5]. The question, then, is how to extract features from data, specifically image data, before processing with a backpropagation neural network. Previous work shows different types of image transformations using region detector methods, such as MSER, Harris-affine, Hessian-affine, intensity-extrema-based regions, edge-based regions, and salient regions [6–10]. MSER performed well on images containing homogeneous regions with distinctive boundaries [6]. In addition, for handwriting studies, the MSER method has been improved with additional Canny edge detection and custom thresholds on the data [11-14]. For image data such as book covers, a relevant feature representation for knowing the contents of a book through its cover is the title of the book [1]. However, the locations and font types of book titles vary. Object detection techniques such as optical character recognition (OCR) [14-17] can be utilized to capture the book title. The OCR methods developed in Keras deep learning and Tesseract are worth considering. Keras deep learning, for example, was able to identify characters with an error rate of 3.72% [14]. Meanwhile, Tesseract, Google's open source optical character recognition engine, was capable of identifying Roman and Chinese characters with 90% accuracy [15–17].
the difficult part of classifying books using their titles lies in two things, namely, how to relate a book title to its content, and the complexity of word structures. first, book title versus its content. a book with the title “investasi rohani” (english: “spiritual investment”) or “matematika pahala” (english: “mathematics of reward”), for example, discusses neither business nor mathematical problems. instead, they are about spiritual or religious matters. secondly, complex word structures in bahasa indonesia. bahasa has many combinations of suffixes and prefixes in its word structures [18, 19]. it needs some methods to identify root words in order to get clear information from the book title. some words change their meaning after an additional suffix or prefix. the word “ajar” (english: “teach”), for example, with the prefix “ber-” becomes “belajar” (english: “learning”) instead of “berajar” (english: “teaching”). both of those words have different meanings. so, it needs extra effort to create a dictionary that is able to distinguish the words which have already changed due to an additional suffix or prefix. motivated by the recent success of methods in image processing and artificial neural networks, the goal of this study is to classify books in bahasa using the scanned image of the cover. this paper combines the techniques of mser, tesseract, and ann backpropagation to achieve better classification results. the contributions of this paper are mainly in the text processing of bahasa and classification using backpropagation neural networks. hopefully, this classification approach helps librarians and staff in charge of bookstores find books more easily and quickly. the second part of this paper discusses the backpropagation method of classification, followed by its results and discussion in the third part.
this paper highlights its main contribution and conclusion in the fourth part.

2 methodology

figure 1 shows the diagram of the research. there are two important steps, namely, feature extraction and classification. feature extraction includes data preprocessing to highlight the area of the text and remove unnecessary background, optical character recognition (ocr), and feature extraction of the ocr result. the classification step builds a classifier using training data and its labels. the performance of the classifier is evaluated using testing data. the optimal model is determined through k-fold combinations of training data and testing data.

figure 1. block diagram of classification method

2.1 feature extraction

feature analysis or extraction in this research consists of three processes, namely: preprocessing, character detection with ocr, and text processing to identify the words that form the book title. after a description of the data, each process is presented.

2.1.1 data

the scanned book-cover images are from the kanisius yogyakarta publisher, categorized into 3 classes: 53 philosophy books, 101 religious books, and 200 education books. each image has a size of 300 cm height and 130 cm width with 96 dpi image resolution. every image has a label which is assigned to its correct class. the labeling process is done manually. in order to compare the performance of the classification model, the training process is done using 2 types of data: the first is data of the book title recognized by ocr, and the second is data without ocr. the labeling of the data followed the same image preprocessing process.

2.1.2 image preprocessing

as a book title might have a different color and gradation from its background, the grayscale image shown in figure 2 is used as the input of the text detection process by mser.
mser captured the objects on the book cover using a threshold delta of 12 and a region area between 20 and 1200. the result of mser was the coordinate values of the objects on the book cover. a detected object may contain the title and the background. after getting the object coordinates, the process changes the background by setting the value of each point to 0 (for non-title pixels) and 1 (for title pixels). figure 3 shows the detection result of mser, and figure 4 shows the result of changing the background of the book title.

figure 2. book cover preprocessing: original, grayscale, and binary form.

figure 3. detection with mser.

figure 4. modification of book cover background (value 0) and book title (value 1)

the mser capabilities can be improved by adding canny edge detection in the process. this research tunes some parameters to find the optimal mser detection. it starts with a threshold delta of 12 for area filtering, and sets the region area between 20 and 1200. this tuning is able to handle more images, but in some cases it removes some parts of the book title. a different setup, using a smaller threshold delta of 5 and a region area between 20 and 800, results in better mser detection, as shown in figure 5. finally, this paper uses the combination of the two settings. in the first run, it sets the threshold delta to 12 and the region area between 20 and 1200. if fewer than one region was detected, the threshold delta was set to 5 and the region area was set between 20 and 800. as figure 5 shows, with the smaller setup the title detection results got better, but other objects outside the title of the book were also included as part of the book title.

figure 5. (a) using region area of 20–1200 and 12 threshold delta; (b) using region area of 20–800 and 10 threshold delta.
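the two-pass parameter fallback described above can be sketched as plain control flow. `detect_regions` here is a caller-supplied stand-in for a real mser detector (for example one built with opencv's mser implementation); the parameter names are assumptions for illustration:

```python
# sketch of the two-pass mser parameter fallback: try a coarse setting
# first, and retry with a more sensitive setting if nothing is found.
# `detect_regions(image, delta, min_area, max_area)` is a placeholder for
# an actual mser detector supplied by the caller.

def detect_title_regions(image, detect_regions):
    # first pass: threshold delta 12, region area between 20 and 1200
    regions = detect_regions(image, delta=12, min_area=20, max_area=1200)
    if len(regions) < 1:
        # fallback: delta 5 and area 20-800 picks up low-contrast titles,
        # at the cost of including extra background objects
        regions = detect_regions(image, delta=5, min_area=20, max_area=800)
    return regions
```

the fallback only runs when the first pass finds nothing, which matches the paper's description of combining the two settings.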
2.1.3 optical character recognition (ocr)

image preprocessing with mser allocates the book title area for further recognition. tesseract ocr then converts the image into text that forms part of the book title. variation of image sizes shows the performance of tesseract ocr in recognizing the words inside the image. table 1 provides the minimum image size tesseract ocr is able to identify. an image with a minimum size of 70px seems a better candidate for character recognition. the words or terms identified in the ocr process then become input for the next process, namely: text processing.

table 1. comparison of text image recognition
- image with height 17px and line weight 4px: identified (height and line weight are identified)
- image with height 14px and line weight 4px: identified (height and line weight are identified)
- image with height 10px and line weight 2px: unidentified (height is unidentified)
- image with height 11px and line weight 1px: identified (line weight is unidentified)

2.1.4 text processing

text processing simply means bringing text into a form that is analyzable for a certain task. there are different ways to process text. to find words from a certain text, this research uses the following steps: case-folding (lowercasing), tokenization (separating words and characters from a text), stop-word removal (removing low-information words from the text) and stemming (removing prefixes and suffixes, and reducing inflection in a word to its root form). the result is a bag-of-words model: a word database that stores the unique words retrieved from the book covers. table 2 illustrates how text data is represented as a numeric vector based on how many terms or words are in the text. table 3 shows a guideline for cutting off word prefixes in bahasa. table 4 shows how words or terms extracted from book covers are represented in a database or term matrix.
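the pipeline above (case folding, tokenization, stop-word removal, stemming, then bag-of-words encoding) can be sketched as follows. the stop-word list and prefix rules are tiny illustrative placeholders, not a complete bahasa stemmer; the irregular "belajar" → "ajar" rule follows the prefix guideline in table 3:

```python
# illustrative text-processing pipeline: case folding, tokenization,
# stop-word removal, a toy prefix stemmer, and bag-of-words encoding.
# STOP_WORDS, IRREGULAR and PREFIXES are small placeholders, not a full
# bahasa vocabulary.

STOP_WORDS = {"untuk", "dan", "yang"}            # illustrative only
IRREGULAR = {"belajar": "ajar"}                  # irregular form from table 3
PREFIXES = ("meng", "mem", "men", "peng", "pem", "ber", "ter", "me", "pe")

def stem(word):
    if word in IRREGULAR:
        return IRREGULAR[word]
    for p in PREFIXES:                           # longest prefixes first
        if word.startswith(p) and len(word) > len(p) + 2:
            return word[len(p):]
    return word

def process(text):
    tokens = text.lower().split()                # case folding + tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]
    return [stem(t) for t in tokens]

def to_vector(tokens, vocabulary):
    # bag-of-words: count of each database term, as illustrated in table 2
    return [tokens.count(term) for term in vocabulary]
```

for example, `process("Belajar untuk Hidup")` lowercases the text, drops the stop word "untuk", and stems "belajar" to its root "ajar".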
the matrix size depends on how many words are in the database.

table 2. illustration of transforming string data into numeric data (columns: database terms “one”, “two”, “five”)
- “one two one” → “one”: 2, “two”: 1, “five”: 0
- “two three” → “one”: 0, “two”: 1, “five”: 0
- “five one four” → “one”: 1, “two”: 0, “five”: 1
- “two five” → “one”: 0, “two”: 1, “five”: 1

table 3. list of prefix changes [19]
- prefix meng-: before phonemes /r, l, m, n, w, y, ng, ny/ becomes me-; before /p, b, f, v/ becomes mem-; before /t, d, c, j, z, sy/ becomes meng-; for phrases of more than one word becomes menge-
- prefix peng-: before /r, l, m, n, w, y, ng, ny/ becomes pe-; before /p, b, f, v/ becomes pem-; before /t, d, c, j, z, sy/ becomes peng-; for phrases of more than one word becomes penge-
- prefix ber-: before /r/ becomes be-; before /ajar/ becomes bel-
- prefix per-: before /r/ becomes pe-; before /ajar/ becomes pel-
- prefix ter-: before /r/ becomes te-

table 4. data input illustration (each title is represented by binary word features over the database 'pahargyan', 'bojana', 'kurban', 'raka', …, 'manusia')
1. 'pahargyan bojana kurban': 'pahargyan' 1, 'bojana' 1, 'kurban' 1, the rest 0
2. 'raka agung sebuah renungan': 'raka' 1, the rest 0
3. 'kurban untuk allah': 'kurban' 1, the rest 0
…
354. 'filsafat manusia': 'manusia' 1, the rest 0

2.2 classification with backpropagation neural network

this section describes the classification model architecture and the experimental setup applied to the classification model training and testing process.

2.2.1 classification with backpropagation neural network

the neural network architecture for classification is shown in figure 6. the input features are word vectors extracted from book cover images. the size of the input matrix is (m × n), where m is the number of word features (in this case, 489 features) and n is the number of image data. this research employs two hidden layers and two output neurons. the optimal architecture is found by varying the number of neurons inside the hidden layers.
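the architecture just described (two hidden layers and a two-neuron output over the word-feature input) can be sketched as a forward pass. the log-sigmoid and linear transfer functions follow their standard definitions; the weights and layer sizes below are placeholders for illustration, using the 15- and 35-neuron hidden layers reported later as the optimal model:

```python
import numpy as np

# forward pass of the two-hidden-layer architecture: log-sigmoid transfer
# in both hidden layers and a linear transfer at the two-neuron output.
# weights are random placeholders, not the trained model.

def logsig(n):
    # log-sigmoid transfer: 1 / (1 + e^(-n))
    return 1.0 / (1.0 + np.exp(-n))

def forward_pass(x, W1, b1, W2, b2, W3, b3):
    a1 = logsig(W1 @ x + b1)     # first hidden layer (e.g. 15 neurons)
    a2 = logsig(W2 @ a1 + b2)    # second hidden layer (e.g. 35 neurons)
    return W3 @ a2 + b3          # pure-linear output, 2 class neurons
```

feeding a 489-dimensional word vector through this network yields a 2-dimensional output representing the class.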
the number of neurons is varied from 5 to 40, with a log-sigmoid transfer function (equation (1)) inside the hidden layers and a pure-linear transfer function (equation (2)) for the output, where:

logsig(n) = 1 / (1 + e^(−n)) (1)

purelin(n) = n (2)

figure 6. network architecture for training and testing [20]

2.2.2 experimental setup

the backpropagation artificial neural network in this experiment builds a classifier model from a set of data that contains both the inputs and the desired outputs. the data, known as training data, consists of a set of training examples. each training example is represented by a vector, sometimes called a feature vector, and the training data is represented by a matrix. in this study, the data matrix consists of 354 samples with 489 features and a class label. a three-fold cross validation approach for obtaining the best model accuracy is employed as follows: the 53 philosophical books are divided into 18 for training, 18 for validation, and 17 for testing; the 101 religious books are separated into 34 samples for training, 34 for validation, and 33 for testing; meanwhile the 200 education books are divided into 67 samples for training, 67 for validation and 66 for testing (see table 5). the three folds of training data and training labels build three classifier models. each model is evaluated using its corresponding testing data. the overall accuracy is taken from their average. the experiment started with a one-hidden-layer neural network architecture with variation of the number of neurons in the hidden layer, followed by a two-hidden-layer neural network architecture as a way to find the optimal architecture. table 5.
data composition for training and testing the classification model:

- religious: 34 training, 34 validation, 33 testing
- school education: 67 training, 67 validation, 66 testing
- philosophical: 18 training, 18 validation, 17 testing

3 results and discussion

this section presents the experimental results: the performance of the proposed model using 1 and 2 hidden layers. in regard to all the experiments done, we also present the optimal model. to ensure the reliability of the model in performing classification, the proposed model is compared with 2 other models.

3.1 model with one hidden layer

this research trained the classifier model using a range of neuron counts (5, 10, 15, 20, 25, 30, 35 and 40) in the first hidden layer. the classifier model is built using training data and is evaluated using testing data, both for data preprocessed with ocr and for data not preprocessed with ocr. figure 7 shows that the classifier model trained with data preprocessed without ocr outperformed the model trained with ocr. it seemed that image preprocessing and ocr are not able to offer good features for classifier model building. on average, the performance is about 10% below the classifier model based on non-ocr features (data labels identified manually).

figure 7. comparison of the results between ocr data and non-ocr data using 1 hidden layer

3.2 model with two hidden layers

the second model is developed from the previous section by adding one more hidden layer. this classifier model is trained and tested using both data preprocessed with ocr and data not preprocessed with ocr. figure 8 shows better performance of the model with 2 hidden layers, trained with ocr data and 15 neurons in the first layer, where the accuracy of the model increased and outperformed the model with only 1 hidden layer. the highest accuracy is 63.31%.
this research also took non-ocr data into account to observe its performance. this two-hidden-layer neural network consists of 25 neurons in the first layer and a varying number of neurons in the second hidden layer. figure 8 shows that the highest accuracy of 79.89% is obtained using 15 neurons in the second hidden layer. the lowest accuracy of 67.82% is obtained at 5 neurons in the second hidden layer. the gap between the lowest and the highest accuracy is 12%. the results imply that there is an increase in the performance of the model trained with non-ocr data with a 2-hidden-layer neural network architecture. as can be seen from figure 8, the non-ocr data reached higher accuracy compared to the classifier model trained with ocr data.

figure 8. model performance of two-hidden-layer backpropagation neural networks

3.3 optimal model

from the experiments done, this research obtained the optimal model with a two-hidden-layer neural network architecture, as seen in figure 9. this optimal model employed a 489-dimension feature input, with the log-sigmoid activation function (equation (1)) in both the first and second hidden layers, 15 neurons in the first hidden layer and 35 neurons in the second hidden layer. the output layer used a linear activation function (equation (2)) and 2 neurons to represent the class output. table 6 presents one of the confusion matrices of the 3-fold cross-validation applied to the most optimal model. the accuracy of the model (table 6, equation (3)) is the sum of true positives (tp) and true negatives (tn) divided by the sum of tp, tn, false positives (fp) and false negatives (fn). figure 9.
architecture of the optimal model

accuracy = (tp + tn) / (tp + tn + fp + fn) (3)

table 6. confusion matrix (rows: predicted; columns: true)
- predicted religion: religion 12, education 11, philosophy 10
- predicted education: religion 4, education 55, philosophy 7
- predicted philosophy: religion 2, education 2, philosophy 13

3.4 single data testing

we evaluated the proposed model with new testing data. tables 7 and 8 show the classification results. based on the results, no model reached 90% accuracy. some different books are classified into the same category. books whose titles contain the term “katolik”, for example, may belong to the class religion or education. the model, however, classified them into the same category: religion. it is possible that the classifier had difficulties in distinguishing those two classes due to the high similarity of book titles, or limited training data to model the classifier.

3.5 comparison with other classification methods

the proposed model was then compared with two different classification methods, namely naïve bayes and support vector machines (svm), using the same data. a comparison of the average accuracy results is shown in table 9. these results confirmed that the proposed method outperformed the other models. it has a gap of 2% with the naïve bayesian probabilistic classifier and a gap of 12% with svm using a polynomial kernel.

3.6 discussion

the highest accuracy of this research, obtained using the 2-hidden-layer model with a combination of 15 and 35 neurons, was 63%. due to several factors (i.e. low image resolution), as many as 32 of 354 images failed to be detected by mser. in these 32 images, bias occurred between the font and its background. some titles also blended with the background image of the cover, so that mser was unable to detect them clearly.
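equation (3) can be checked directly against the confusion matrix in table 6 above; for a three-class matrix, accuracy generalizes to correct predictions (the diagonal) over all predictions. the matrix values below copy table 6 (rows = predicted, columns = true):

```python
# accuracy per equation (3): correct predictions over all predictions.
# for the 3-class case this is the diagonal sum over the matrix total.

def accuracy(confusion):
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

table6 = [
    [12, 11, 10],   # predicted religion
    [4, 55, 7],     # predicted education
    [2, 2, 13],     # predicted philosophy
]
```

on this single fold the diagonal sums to 80 out of 116 predictions; the 63.31% figure reported in the paper is the average over all three folds.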
in the ocr processing, there are a number of problems in identification, especially those related to the size and thickness of the font. small and thin fonts are difficult to identify. as much as 61% of the testing data was successfully detected by tesseract, 27% was incorrectly detected, and 12% was not detected at all. the text processing contributed a 16% increase in correct classification. as mentioned above, when the model met data that contained similar words from different classes, it had difficulties in identifying the difference due to the high similarity of book titles, or limited training data to model the classifier. this research then did some data preprocessing without using mser and tesseract. the accuracy increased to 83.33%. the proposed model performed well especially in handling data that failed to be detected with the former model.

table 7. result of single data testing using the model with one hidden layer (neurons in first hidden layer → accuracy (%))
5 → 50; 10 → 50; 15 → 66.67; 20 → 50; 25 → 50; 30 → 50; 35 → 66.67; 40 → 50

table 8. result of single data testing using the model with two hidden layers (15 neurons in the first hidden layer; neurons in second hidden layer → accuracy (%))
5 → 83.33; 10 → 83.33; 15 → 66.67; 20 → 66.67; 25 → 66.67; 30 → 83.33; 35 → 83.33; 40 → 66.67

table 9. comparison of model accuracy
- backpropagation: average accuracy 63.31%, misclassified data 129
- naïve bayesian: average accuracy 61.29%, misclassified data 137
- svm: average accuracy 51.44%, misclassified data 171

4 conclusion

the backpropagation artificial neural network classifier was able to correctly identify the book class or category based on the title written on the book cover.
modeling using data not preprocessed with ocr offered better performance than using ocr-preprocessed data. the highest accuracy of 63.31% was obtained for the model trained with ocr using two hidden layers with 15 neurons in the first hidden layer and 35 neurons in the second hidden layer. meanwhile, an accuracy of 79.89% was obtained for the classifier model built without ocr-preprocessed data, using two hidden layers with 25 neurons in the first layer and 15 neurons in the second hidden layer. this approach outperformed two other models, namely naïve bayes and support vector machines, on the same data. the ability of the model to carry out the classification depends on the image quality, data variation, and the number of training data. to increase the performance of the classifier, some additional considerations such as image quality and variation, different image preprocessing methods, vocabulary variations, and different modeling approaches are worth exploring.

declaration of interest

the authors report no conflicts of interest. the authors themselves are responsible for the content and writing of this article.

acknowledgements

the authors acknowledge the kanisius yogyakarta publisher for their support and encouragement, especially for providing access to the book cover images.

references

[1] b. k. iwana, s. t. r. rizvi, s. ahmed, a. dengel, and s. uchida, “judging a book by its cover.” 2016.

[2] i. goodfellow, y. bengio, and a. courville, deep learning. mit press, 2016.

[3] k. priandana, i. abiyoga, w. wulandari, s. wahyuni, m. hardhienata, and a. buono, “development of computational intelligence-based control system using backpropagation neural network for wheeled robot.” in ieee international conference, 17, pp. 101–106, 2016.

[4] w. maneesukasem and c.
pintavirooj, “urine sediment image segmentation based on feedforward backpropagation neural network.” in the 5th biomedical engineering international conference (bmeicon), pp. 3–6, 2012.

[5] t. mandl, “tolerant information retrieval with backpropagation networks.” neural comput. appl., 9 (4), pp. 280–289, 2000.

[6] k. mikolajczyk et al., “a comparison of affine region detectors.” int. j. comput. vis., 65 (1), pp. 43–72, 2005.

[7] j. matas, o. chum, m. urban, and t. pajdla, “robust wide-baseline stereo from maximally stable extremal regions.” image vis. comput., 22 (10), pp. 761–767, 2004.

[8] d. nistér and h. stewénius, “linear time maximally stable extremal regions.” in forsyth d., torr p., zisserman a. (eds) computer vision – eccv, 5303, berlin, heidelberg: springer, pp. 183–196, 2008.

[9] x. shen, g. hua, l. williams, and y. wu, “dynamic hand gesture recognition: an exemplar-based approach from motion divergence fields.” image vis. comput., 30 (3), pp. 227–235, 2012.

[10] q. zhang, y. wang, and l. wang, “registration of images with affine geometric distortion based on maximally stable extremal regions and phase congruency.” image vis. comput., 36, pp. 23–39, 2015.

[11] n. otsu, “a threshold selection method from gray-level histograms.” ieee trans. syst., man, cybern., 9 (1), pp. 62–66, 1979.

[12] h. chen, s. s. tsai, g. schroth, d. m. chen, r. grzeszczuk, and b. girod, “robust text detection in natural images with edge-enhanced maximally stable extremal regions.” in 18th ieee international conference on image processing, pp. 2609–2612, 2011.

[13] m. r. islam, c. mondal, m. k. azam, and a. s. m. j. islam, “text detection and recognition using enhanced mser detection and a novel ocr technique.” in 5th international conference on informatics, electronics and vision (iciev), pp. 15–20, 2016.

[14] z. zhang, k. qi, k. chen, c. li, j. chen, and h.
guan, “a novel system for robust text location and recognition of book covers.” in zha h., taniguchi r., maybank s. (eds) computer vision – accv 2009. lecture notes in computer science, 5995, berlin, heidelberg: springer, pp. 608–609, 2010.

[15] r. smith, “an overview of the tesseract ocr engine.” in ninth international conference on document analysis and recognition (icdar), 2, pp. 629–633, 2007.

[16] r. smith, d. antonova, and d. lee, “adapting the tesseract open source ocr engine for multilingual ocr.” in proceedings of the international workshop on multilingual ocr, pp. 1–8, 2009.

[17] google, “tesseract open-source ocr.” [online]. available: https://opensource.google.com/projects/tesseract, 2018.

[18] c. d. manning, p. raghavan, and h. schutze, an introduction to information retrieval. cambridge, united kingdom: cambridge university press, 2009.

[19] m. mustakim, seri penyuluhan bahasa indonesia: bentuk dan pilihan kata. jakarta: pemasyarakatan pusat pembinaan dan bahasa, badan pengembangan dan pembinaan kementerian pendidikan dan kebudayaan, 2014.

[20] m. t. hagan and m. h. beale, neural network design, 2nd ed. oklahoma, 2014.

international journal of applied sciences and smart technologies, volume 3, issue 1, pages 131–142, p-issn 2655-8564, e-issn 2685-9432

classification of toddler nutrition using c4.5 decision tree method

kartono pinaryanto1,*, robertus adi nugroho1, yanuarius basilius1

1department of informatics, faculty of science and technology, sanata dharma university, yogyakarta, indonesia

*corresponding author: kartono@usd.ac.id

(received 07-05-2021; revised 15-06-2021; accepted 16-06-2021)

abstract

nutrition is very much needed in the growth of toddlers.
it is very important to give babies a balanced nutritional intake at the right stage so that the baby grows healthy and is accustomed to a healthy lifestyle in the future. children under five years of age are a group that is vulnerable to health and nutrition problems. determining nutritional status can be done systematically using the c4.5 decision tree classification method with several variables or attributes as input. the dataset tested consisted of 853 toddlers. classification is carried out to determine the nutritional status based on the weight/age (bb/u), height/age (tb/u) and weight/height (bb/tb) categories. the attributes used for the classification of bb/u are gender, weight and age. the attributes used for tb/u are gender, body length or height, and age. the attributes used for bb/tb are gender, weight, body length or height, and age. the average accuracy of the bb/u category is 90.16%, the average accuracy of the tb/u category is 76.64%, and the average accuracy of the bb/tb category is 83.83%.

keywords: classification, decision tree, c4.5, nutrition for toddlers

1 introduction

nutrients are organic substances required for the normal functioning of the body’s systems, growth and health maintenance. it is very important to give babies a balanced nutritional intake at the right stage so that the baby grows healthy and is accustomed to a healthy lifestyle in the future. children under five years of age are a group that is vulnerable to health and nutrition problems, so the toddler years are an important period of growth and need serious attention [1]. based on the results of the 2018 ministry of health's basic health research, 17.7% of infants under 5 years of age (toddlers) still experience nutritional problems.
this figure consisted of under-fives who suffered from severe malnutrition, 3.9%, and those suffering from undernutrition, 13.8% [2]. the nutritional status of toddlers can be measured by anthropometry; the anthropometric indices often used are: body weight for age (bb/u), height for age (tb/u), and body weight for height (bb/tb). the weight-for-age index (bb/u) is the most commonly used indicator because it has the advantage of being easier and quicker to understand by the general public. the reference standard used for determining nutritional status by anthropometry, based on the decree of the minister of health no. 920/menkes/sk/viii/2002, is the reference book of the “world health organization-national center for health statistics” (who-nchs), by looking at the z-score. determining nutritional status has been done manually by the community health centers, so patients have to come physically to the community health centers. this is of course very troublesome, especially in the current pandemic situation. determining nutritional status can be done automatically using a classification approach. one approach that can be taken is to use the c4.5 decision tree method. the c4.5 method is an algorithm that works by applying the concept of a decision tree. a decision tree is a predictive model using a tree structure or hierarchical structure. the concept of a decision tree is to transform data into a decision tree with decision rules. previous research on the comparison of the performance of the c4.5 and naive bayes algorithms for the classification of scholarship recipients by choirul anam and harry budi santoso stated that the c4.5 algorithm has better performance than naive bayes, with an accuracy of 96.4% obtained using the c4.5 algorithm, while the accuracy rate of naïve bayes is 95.11% [3].
research [4] on the classification of typhoid fever (tf) and dengue hemorrhagic fever (dhf) applied the c4.5 decision tree algorithm. it concluded that, using the k-fold cross validation test, the highest average accuracy value is 91.875%, using 32 test data and 128 training data. from the description above, a study was conducted using the c4.5 decision tree method to determine the nutritional status of children under five. it is hoped that applying the c4.5 decision tree method can help classify the nutritional status of toddlers to determine the growth of children under five.

2 methodology

the methodology used in this study is as follows (figure 1).

figure 1. c4.5 decision tree classification research methodology

the research begins by preparing the dataset; the dataset then goes through the cleaning process, followed by data selection. in the next stage, the data is divided into testing data and training data. training data is used to form a decision tree, while testing data is used to evaluate the system being created. the next subsections explain each stage in detail.

2.1. dataset

the dataset used in this study is the monitoring data on the nutritional status of toddlers, obtained from the kebong health center, kelam permai district, sintang district, west borneo, in 2017, with a total of 853 toddlers. monitoring data on the nutritional status of toddlers has three categories, namely the category of body weight for age (bb/u), height for age (tb/u), and body weight for height (bb/tb). the bb/u category has 4 classification labels, namely best, good, bad and worst. the tb/u category has 4 classification labels, namely high, normal, short, and very short.
while the bb/tb category has 4 classification labels, namely fat, normal, thin, and very thin (table 1).

table 1. categories and labels
1. bb/u: best, good, bad, worst
2. tb/u: high, normal, short, very short
3. bb/tb: fat, normal, thin, very thin

2.2. data cleaning

data cleaning is a process for cleaning unused data [5]. in this study, some data were deleted because they were incomplete. examples of deleted data are records that do not have a bb/tb label, have no pb/tb value, or do not have a tb/pb conversion value.

2.3. data selection

in the dataset, there are 19 attributes, including name, date of birth, gender m/f, body weight, pb/tb, measured position, age, age calculation process, conversion of tb/pb, age family, code, code1, code2, poor bb/u nutritional standards, good bb/u nutritional standards, short pb/u or tb/u standards, normal pb/u or tb/u standards, weight standards of bb/tb, and normal standards of bb/tb. at the data selection stage, the attributes used for the classification were determined (feature selection). the attributes gender m/f, body weight, pb/tb and age were selected, based on recommendations from the health center. the results of the attribute selection are shown in table 2.

table 2. attributes used by each category
1. bb/u — attributes: gender m/f, body weight, age; labels: best, good, bad, worst
2. tb/u — attributes: gender m/f, pb/tb, age; labels: high, normal, short, very short
3. bb/tb — attributes: gender m/f, body weight, pb/tb, age; labels: fat, normal, thin, very thin

2.4. dividing the dataset

the dataset is divided into testing data and training data using k-fold validation. the number k is chosen by the user, where the values of k are 3, 5, 7 and 9 folds.
If k = 3, the data are divided into 3 parts: 2 parts are used as training data and 1 part as testing data, and likewise for 5, 7, and 9 folds.

2.5. Modeling the C4.5 decision tree

Each fold is modeled using the C4.5 decision tree method, so there are n models for n folds. The C4.5 method classifies the data by computing the entropy, information gain, split info, and gain ratio. Tree formation begins by finding the highest gain ratio value to become the root node; leaf nodes are then determined recursively until the decision tree is complete [6]. The following is an example of the tree formation steps:

1. Prepare the data to be used to build the C4.5 decision tree model. This example uses 9 records of children under five for the classification of the BB/U category, with the attributes used according to Table 3.

2. Separate the data into training data (Table 4) and testing data (Table 5), using a total of 3 folds.

Table 3. Dataset
  Gender (M/F)  Body weight  Age  BB/U
  1             8            9    good
  1             7.8          8    good
  1             10.1         8    good
  2             6.1          6    good
  2             4.6          6    worst
  2             10           44   worst
  2             7.3          27   worst
  2             8.9          17   worst
  1             8.1          26   worst

Table 4. Training data
  Gender (M/F)  Body weight  Age  BB/U
  1             8            9    good
  1             7.8          8    good
  1             10.1         8    good
  2             6.1          6    good
  2             4.6          6    worst
  2             10           44   worst

Table 5. Testing data
  Gender (M/F)  Body weight  Age  BB/U
  2             7.3          27   worst
  2             8.9          17   worst
  1             8.1          26   worst

3. Calculate the entropy using formula (1), the information gain using formula (2), the split info using formula (3), and the gain ratio using formula (4) for each attribute. The entropy is formulated as

   Entropy(S) = Σ_{i=1}^{n} −p_i log2(p_i),   (1)

where S is the set of cases, n is the number of partitions of S, and p_i is the proportion of S_i to S.
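As an illustration (not from the paper), formula (1) can be evaluated on the six training labels of Table 4, which contain 4 "good" and 2 "worst" records:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Formula (1): Entropy(S) = sum over classes of -p_i * log2(p_i)."""
    n = len(labels)
    return sum(-(c / n) * log2(c / n) for c in Counter(labels).values())

# The 6 training labels from Table 4: 4 "good" and 2 "worst".
training_labels = ["good"] * 4 + ["worst"] * 2
print(round(entropy(training_labels), 4))  # 0.9183
```

A perfectly pure partition (all labels the same) has entropy 0, which is why splits that separate the classes cleanly produce a high information gain in formula (2).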
The gain is formulated as

   Gain(S, A) = Entropy(S) − Σ_{i=1}^{n} (|S_i| / |S|) · Entropy(S_i),   (2)

where S is the sample, A is an attribute, n is the number of partitions of attribute set A, |S_i| is the number of samples in partition i, and |S| is the number of samples in S. The split info is formulated as

   SplitInfo(S, A) = − Σ_{i=1}^{v} (|S_i| / |S|) · log2(|S_i| / |S|),   (3)

where v is the number of subsets resulting from splitting on attribute A, which has v values. Then, the gain ratio is

   GainRatio(S, A) = Gain(S, A) / SplitInfo(S, A).   (4)

Next, find the root node candidates by looking for the highest information gain value for each attribute, and determine the root node by finding the highest gain ratio value among these candidates. The highest gain ratio value is found in the body weight attribute at the value 4.6, so the root node of the tree is body weight with a split value of 4.6. The decision tree formed from this calculation is shown in Figure 2.

Figure 2. Root node

4. After obtaining the root node, search for the leaf nodes. Records with a body weight value of 4.6 are deleted/removed from the dataset before searching for leaf nodes (Table 6).

Table 6. The dataset at node 2
  Gender (M/F)  Body weight  Age  BB/U
  1             8            9    good
  1             7.8          8    good
  1             10.1         8    good
  2             6.1          6    good
  2             4.6          6    worst
  2             10           44   worst

After the removal, continue by searching for the leaf nodes, again looking for the highest information gain value. The highest information gain value is in the age attribute at the value 9, so the leaf node is age: if the age is below 9 the classification label is good, and if it is above 9 the classification label is worst.
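A sketch of formulas (2), (3), and (4) for a binary split of a numeric attribute at a threshold is given below (illustrative code, not the paper's implementation; row layout and names are assumptions):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Formula (1)."""
    n = len(labels)
    return sum(-(c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, attr_index, threshold, label_index=-1):
    """Formulas (2)-(4): gain ratio for splitting `rows` on
    rows[attr_index] <= threshold versus > threshold."""
    labels = [r[label_index] for r in rows]
    left = [r[label_index] for r in rows if r[attr_index] <= threshold]
    right = [r[label_index] for r in rows if r[attr_index] > threshold]
    n = len(rows)
    # Formula (2): parent entropy minus weighted child entropies.
    gain = entropy(labels) - sum(
        (len(part) / n) * entropy(part) for part in (left, right) if part)
    # Formula (3): entropy of the split sizes themselves.
    split_info = -sum(
        (len(part) / n) * log2(len(part) / n) for part in (left, right) if part)
    # Formula (4), guarding against a degenerate (one-sided) split.
    return gain / split_info if split_info else 0.0

# Training rows from Table 4: (gender, body weight, age, label).
rows = [(1, 8, 9, "good"), (1, 7.8, 8, "good"), (1, 10.1, 8, "good"),
        (2, 6.1, 6, "good"), (2, 4.6, 6, "worst"), (2, 10, 44, "worst")]
print(gain_ratio(rows, 1, 4.6))  # gain ratio of splitting body weight at 4.6
```

In a full C4.5 build, this computation is repeated for every candidate attribute and threshold, and the highest gain ratio determines the node, as described in the steps above.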
The resulting tree is shown in Figure 3.

Figure 3. Leaf node

2.6. Evaluation

Several experiments were carried out to evaluate the system. In each experiment the data were divided into 3, 5, 7, or 9 folds, and each experiment was carried out for each category: BB/U, TB/U, and BB/TB. The experiments are listed in Table 7.

Table 7. C4.5 decision tree experiments
  Experiment  Number of folds
  1st         3
  2nd         5
  3rd         7
  4th         9

3 Results and Discussion

Based on the experiments, the system is able to classify the nutritional status of toddlers by BB/U, TB/U, and BB/TB. For the BB/U category, the measured accuracy was 89.52% with 3 folds, 90.93% with 5 folds, 90.10% with 7 folds, and 90.10% with 9 folds. These results give an average accuracy of 90.16%, with the greatest accuracy obtained using 5 folds (Table 8). This shows that the system can classify the BB/U category well.

Table 8. Results of the BB/U experiments
  Experiment  Number of folds  Average accuracy (%)
  1           3                89.52
  2           5                90.93
  3           7                90.10
  4           9                90.10

The TB/U category trials showed an average accuracy of 76.64%, with the highest accuracy at 7 folds (Table 9).

Table 9. Results of the TB/U experiments
  Experiment  Number of folds  Average accuracy (%)
  1           3                75.27
  2           5                75.96
  3           7                78.32
  4           9                77.03

The BB/TB category trials showed an average accuracy of 83.83%, with the highest accuracy at 7 folds (Table 10).

Table 10.
Results of the BB/TB experiments
  Experiment  Number of folds  Average accuracy (%)
  1           3                83.27
  2           5                83.27
  3           7                84.45
  4           9                84.34

Based on the test results, the C4.5 decision tree works well for classifying the BB/U, TB/U, and BB/TB categories using the selected attributes, although a minority of cases cannot be classified properly.

4 Conclusion

From the results of the nutritional classification of children under five using the C4.5 decision tree method, the following conclusions can be drawn:
1. The C4.5 decision tree classification method can be used to classify the nutrition of toddlers reasonably well.
2. The average accuracy for each category is as follows:
   a. The BB/U category classification has an average accuracy of 90.16%.
   b. The TB/U category classification has an average accuracy of 76.64%.
   c. The BB/TB category classification has an average accuracy of 83.83%.

References

[1] P. T. Juniman, "4 ancaman bahaya yang dialami balita dengan gizi buruk." [Online]. Available: https://www.cnnindonesia.com/gaya-hidup/20180125110614255-271456/4-ancaman-bahaya-yang-dialami-balita-dengan-gizi-buruk, 2008.
[2] Kemenkes, Hasil utama riset kesehatan dasar Kementerian Kesehatan 2018. [Online]. Available: https://www.depkes.go.id/resources/download/infoterkini/materi_rakorpop_2018/hasil%20riskesdas%202018.pdf, 2018.
[3] C. Anam and H. B. Santoso, "Perbandingan kinerja algoritma C4.5 dan Naive Bayes untuk klasifikasi penerima beasiswa," Jurnal Energy, 8 (1), 13–19, 2018. [Online]. Available: https://ejournal.upm.ac.id/index.php/energy/article/view/111
[4] U. Febriana, M. T. Furqon, and B. Rahayudi,
"Klasifikasi penyakit typhoid fever (TF) dan dengue haemorhagic fever (DHF) dengan menerapkan algoritma decision tree C4.5 (studi kasus: Rumah Sakit Wilujeng Kediri)," Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer, 2 (3), 1275–1282, 2017. [Online]. Available: https://j-ptiik.ub.ac.id/index.php/j-ptiik/article/view/1124
[5] J. Han and M. Kamber, Data Mining: Concepts and Techniques, second edition, Morgan Kaufmann Publishers, 2006.
[6] D. T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, Inc., 2005.

International Journal of Applied Sciences and Smart Technologies
Volume 3, Issue 2, Pages 171–184
p-ISSN 2655-8564, e-ISSN 2685-9432

Alarm System and Emergency Message from Wheelchair User Emergency Condition

Yavez E. Loho 1, Diana Lestariningsih 1,*, Peter R. Angka 1

1 Department of Electrical Engineering, Faculty of Engineering, Widya Mandala Catholic University Surabaya, Indonesia
* Corresponding author: diana@ukwms.ac.id

(Received 12-09-2021; revised 31-12-2021; accepted 31-12-2021)

Abstract

When someone uses a wheelchair, there is still a possibility of an accident, such as the user suddenly falling out of the wheelchair or falling together with the wheelchair. To give notification of emergency conditions for wheelchair users, an alarm system is designed that can send messages to an intended mobile number. The system is designed using a Wemos D1 mini with ultrasonic, MPU-6050, and E18-D80NK proximity sensors.
The conclusions from the measurement and test results are: the value read by the MPU-6050 sensor is taken from one axis for each falling direction, namely y ≤ 180° for falling left, x ≤ 50° for falling right, z ≤ 65° for falling forward, and z ≥ 140° for falling backward. The ultrasonic sensor works well for detecting the presence of the user's legs, and the E18-D80NK proximity sensor works well for detecting the position of the user sitting in the wheelchair. Receiving notifications through the Blynk server works well and is not affected by distance, provided the device has an internet connection.

Keywords: wheelchair, MPU-6050 sensor, ultrasonic sensor, emergency conditions, Wemos D1 mini

1 Introduction

Wheelchairs are widely used to help people who have difficulty walking, whether due to injury or disability. They are available not only in hospitals but are also sold at general health outlets in every area, so it is relatively easy for people who have difficulty walking to buy one. There are various types of wheelchairs: manual, electric, and sport. The type most often found among the general public is the manual type, because its price is affordable and its use is practical. A manual wheelchair is moved by being pushed by another person or by the user's own hands. In practice, wheelchair safety is often imperfect: many emergency conditions still occur, such as the user falling out of the wheelchair or falling together with the wheelchair. These emergency conditions can occur because the user moves excessively, because the user is tired or weak, or because of a lack of supervision.
To give notification of a wheelchair user's emergency condition, a warning or notification device is needed so that other people can immediately provide assistance. Several related studies on wheelchair safety systems have been carried out: a smart wheelchair that avoids obstacles using 8 proximity sensors and cameras (Yeounggwang Ji, 2013); an electric wheelchair that avoids obstacles using the HC-SR04 sensor, able to detect obstacles at less than 2 m (Taizo Miyachi et al., 2016); an electric wheelchair safety system against impact using 6 proximity sensors to detect obstacles (Darul, Muslimin, 2017); and an automatic wheelchair with an intelligent control mode and a Wi-Fi positioning system, designed for users with low-level visual impairment, which scans the wheelchair's position and controls its direction remotely (Manjunath, Gurukiran, 2018). These references focus on maintaining safety while using a wheelchair; none of them reports the emergency condition when an accident does occur. Therefore, the purpose of this study is to design an alarm system that notifies others of a wheelchair user's emergency condition and sends messages to telephone numbers that have been stored in advance. The goal is a wheelchair alarm system that gives notifications automatically when the user is in an emergency condition, namely falling together with the wheelchair or falling out of the wheelchair without it, so that further action can be taken by the notification recipient or by people nearby who hear the alarm sound.
Readers are referred to published work [1], [2], [3], [4], [5], [6], [7], [8], and [9].

2 Research Methodology

This section describes the research methodology used in this work.

2.1 Block diagram

Figure 1. Block diagram

Figure 1 shows the block diagram of the designed system. The overall operation of the system is governed by the Wemos D1 mini as the main microcontroller. The design uses 2 pairs of ultrasonic sensors placed on the pipes on the right and left sides of the wheelchair footrest retainer, an E18-D80NK proximity sensor on the right side of the wheelchair seat, and an MPU-6050 sensor on the back side of the wheelchair. The ultrasonic and E18-D80NK proximity sensors detect the presence of a user sitting in the wheelchair. The ultrasonic sensors on the footrest support pipes detect the user's legs when the soles of the feet are in the footrest position. When the user is about to get out of the wheelchair, the footrest is opened; the ultrasonic sensors then detect the open footrest, and the proximity sensor on the right side of the seat detects that no user is present, indicating that the user has left the wheelchair. The MPU-6050 sensor on the wheelchair detects whether the tilt passes a set value, to determine whether the wheelchair is in a normal position or has fallen. If the user falls together with the wheelchair, the MPU-6050 detects a change beyond the predetermined value, indicating that the user is in an emergency condition, and the Wemos D1 mini Wi-Fi module sends an emergency message as a notification through the Blynk server to the target person's cellphone, whose number has been saved in advance.
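The decision flow described above can be sketched as follows (an illustrative sketch only; the function name, arguments, and return strings are assumptions, not the device firmware):

```python
def emergency_state(tilt_exceeded, legs_detected, user_seated, footrest_open):
    """Illustrative decision logic for the block diagram: an emergency is
    reported when the wheelchair tilts past a threshold (fall together with
    the wheelchair), or when the user disappears from both the footrest and
    the seat without the footrest having been opened for a normal exit."""
    if tilt_exceeded:
        return "fall with wheelchair"
    if not legs_detected and not user_seated and not footrest_open:
        return "fall without wheelchair"
    return "user is fine"

# A user sitting normally triggers no alert:
print(emergency_state(False, True, True, False))  # user is fine
```

In the real system the two emergency branches correspond to sending a Blynk notification and sounding the buzzer, while the last branch corresponds to the "patient is fine" display.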
Under certain conditions, a wheelchair user may fall forward without the wheelchair. In this case, the ultrasonic sensors detect that there is no user in the wheelchair, and the Wemos D1 mini Wi-Fi module sends an emergency notification to the target phone via the Blynk server. The parts of the system block diagram are as follows:
1. The Wemos D1 mini is the main controller; it processes all data and handles communication from the device to the Blynk server for sending notifications.
2. The MPU-6050 sensor detects the tilt of the wheelchair when the user is in an emergency condition.
3. The ultrasonic sensors on the left and right sides of the footrest supports detect the user's legs when the soles of the feet are in the footrest position.
4. The E18-D80NK proximity sensor detects the presence of the user while sitting in the wheelchair.
5. The eight-bit LCD display shows information on the position of the wheelchair user and who can be contacted when the user is in an emergency condition.
6. The Blynk server is the application used to receive notifications when the wheelchair user is in an emergency condition.

2.2 Hardware design

Figure 2 shows the placement of the black box at the back side of the wheelchair. The black box has dimensions of 18 × 11 × 6 cm and contains the MPU-6050 sensor, the Wemos D1 mini, and the LCD display. The MPU-6050 sensor is placed in the box to keep it in a stable position.

Figure 2. Black box placement at the back side of the wheelchair

The Wemos D1 mini, as the microcontroller, processes the data received from each sensor. It is placed in a box to prevent circuit damage from external factors such as accidental collisions.
Figure 3 shows the positions of the components.

Figure 3. Component positions in the black box

Figure 4. Placement of the ultrasonic sensors on the footrest pipes

Figure 4 shows the positions of the ultrasonic sensors on the right and left side pipes of the footrest holder. The ultrasonic sensors are placed so that they do not face each other, to avoid confusing the expected data readings. Figure 5 shows the placement of the E18-D80NK proximity sensor. The sensor is placed on the right side of the wheelchair, where it detects the presence of the user while sitting.

Figure 5. E18-D80NK proximity sensor

Table 1 shows the connections between the Wemos D1 mini pins and the components used.

Table 1. Pins on the Wemos D1 mini and components
  Wemos D1 mini pin  Component                     Component pin
  A0                 E18-D80NK proximity sensor    IN
  D0                 infrared sensor               VCC
  D1                 LCD, MPU-6050                 SCL
  D2                 LCD, MPU-6050                 SDA
  D3                 buzzer                        OUT
  D5                 ultrasonic sensor             IN
  D6                 ultrasonic sensor             IN
  D7                 ultrasonic sensor             IN
  D8                 ultrasonic sensor             IN
  GND                LCD, infrared sensor, buzzer  GND
  5V                 LCD                           VCC

2.3 Flowchart

The flowchart used to detect the falling position of a wheelchair user, with or without the wheelchair, is shown in Figure 6.

Figure 6. Flowchart

The flowchart starts by initializing the variables, LCD, Wi-Fi, Blynk, pinout, and serial components. The next step runs the Blynk, main program, and gyro functions. The main program reads the E18-D80NK IR (proximity) sensor.
If the IR and ultrasonic sensors detect a wheelchair user, the LCD displays the words "patient is fine". If the IR sensor does not detect a sitting user and the ultrasonic sensors do not detect the user's legs, the buzzer sounds and the display-number function is executed. The display-number function sends a notification of the emergency condition to the Blynk server and shows the telephone numbers that can be contacted on the LCD. When the gyro function runs, it detects whether the wheelchair has fallen to the left, right, forward, or backward. If the wheelchair has fallen, the display-number function is executed and the buzzer sounds as a warning of a wheelchair-user emergency; otherwise, if no falling position is detected, the LCD displays that the user is fine.

3 Results and Discussions

3.1 MPU-6050 sensor test results for falling conditions with the wheelchair

This test was carried out to obtain values that can be used as references for when the wheelchair is in a fallen condition and a notification alert needs to be sent to Blynk. The sensor has x, y, and z axes whose degree values are read when the wheelchair is dropped. The degree values obtained are recorded and used as references for the direction of the wheelchair's fall; see Figure 7.

Figure 7. Testing the MPU-6050 sensor on a wheelchair: (a) dropped to the left, (b) dropped to the right, (c) dropped forward, (d) dropped backward

Table 2.
Reading of the MPU-6050 axis values on a wheelchair
  Position       x-axis value  y-axis value  z-axis value
  Upright        100°          216°          97°
  Fall left      135°          181°          91°
  Fall right     53°           1°            89°
  Fall forward   106°          255°          148°
  Fall backward  115°          103°          26°

The results in Table 2 show that the MPU-6050 sensor is very sensitive: the axis values read change easily. The axis values can change when the wheelchair moves quickly or when a shock affects the sensor. In determining the angle value used as a reference for a falling position, only the one axis with the largest change in each falling position is taken. Only one axis value is used as a reference because, if all three axes were used, one of the axis values might coincide across directions, causing confusion in determining the direction of the wheelchair's fall. The axes taken for each falling condition are shown in Table 3.

Table 3. Reference axis for falling conditions
  Fall condition  Axis
  Left            y
  Right           x
  Forward         z
  Backward        z

In accordance with Table 3, if the value of the reference axis is more than or less than the predetermined value, the wheelchair is considered to have fallen, and the Wemos sends an emergency warning.

3.2 Test results of the ultrasonic and E18-D80NK proximity sensors for falling conditions without the wheelchair

The test was carried out with a user sitting in the wheelchair and simulating an emergency fall without the wheelchair. The two ultrasonic sensors on the right and left sides of the footrest and the E18-D80NK proximity sensor on the right side of the wheelchair detect the presence or absence of the wheelchair user.
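As an aside, the reference readings in Table 2 can be used to classify a new MPU-6050 reading. Rather than the paper's single-axis thresholds, the sketch below (illustrative only, not the device firmware) simply picks the Table 2 position whose reference angles are nearest to the current reading:

```python
# Reference axis readings taken from Table 2 (degrees).
REFERENCE = {
    "upright":       {"x": 100, "y": 216, "z": 97},
    "fall left":     {"x": 135, "y": 181, "z": 91},
    "fall right":    {"x": 53,  "y": 1,   "z": 89},
    "fall forward":  {"x": 106, "y": 255, "z": 148},
    "fall backward": {"x": 115, "y": 103, "z": 26},
}

def classify(x, y, z):
    """Illustrative nearest-reference classifier: return the Table 2
    position whose reference reading is closest to the (x, y, z) angles."""
    def dist(ref):
        return (ref["x"] - x) ** 2 + (ref["y"] - y) ** 2 + (ref["z"] - z) ** 2
    return min(REFERENCE, key=lambda pos: dist(REFERENCE[pos]))

print(classify(100, 216, 97))  # upright
```

Nearest-reference matching sidesteps picking per-axis thresholds, at the cost of comparing against all five reference positions on every reading.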
The test starts with an empty wheelchair; the footrest is then opened, and both sensors detect this as a normal condition. The two sensors send an unsafe signal when the ultrasonic sensor detects no foot object and the E18-D80NK proximity sensor on the right side also detects no thigh object. The test results can be seen in Table 4.

Table 4. Test results of the ultrasonic and E18-D80NK proximity sensors in detecting the presence of a wheelchair user
  Ultrasonic  Proximity E18-D80NK  Object detected    User condition
  detect      not detect           footrest           no wheelchair user
  detect      detect               legs, upper thigh  wheelchair user detected
  not detect  not detect           -                  user in a falling-down position
  not detect  detect               upper thigh        user detected, in fine condition

Figure 8. Testing of the ultrasonic and E18-D80NK proximity sensors: (a) the sensors detect the presence of the user, (b) the user falls down in front of the wheelchair and the sensors no longer detect the user, (c) the Wemos sends an emergency message

Figure 8 shows several positions in which the sensors do and do not detect the presence of a wheelchair user. Figure 8(a) shows the active ultrasonic sensor blocked by the legs of the wheelchair user and the active proximity sensor blocked by the seated user. Figure 8(b) shows the user falling toward the front of the wheelchair while the wheelchair's position does not change. Moments later, the ultrasonic sensor detects that it is no longer blocked by the user's legs, and the E18-D80NK proximity sensor is no longer blocked by the user's thigh.
Figure 8(c) shows the system detecting the user's emergency situation and sending a message to the previously saved mobile number. From these test results, it can be concluded that the ultrasonic and E18-D80NK proximity sensors function properly in detecting the presence or absence of a wheelchair user.

3.3 Notification delivery test results

This test aims to find out from how far away the recipient can receive the emergency message notification from the location of the wheelchair user. Notifications are sent by the Wemos D1 mini Wi-Fi module, and the Blynk application platform serves as the receiver of the emergency warnings. Testing is done by turning on Blynk to receive notifications. To test the delivery distance, the receiving device is placed at a predetermined location and a test of receiving the emergency notification is carried out. The wheelchair was placed at the Kalijudan campus of Widya Mandala Catholic University Surabaya, located in Tambaksari district in the East Surabaya area. The results of sending and receiving notifications are shown in Table 5.

Table 5. Testing the delivery distance of notifications
  No  Distance from wheelchair (km)  District or location                             Message sent or not
  1   8.5                            Sukolilo, East Surabaya                          sent
  2   10                             Pabean Cantian, North Surabaya                   sent
  3   13                             Gayungan, South Surabaya                         sent
  4   19                             Lakarsantri, West Surabaya                       sent
  5   26.1                           Kota Gresik                                      sent
  6   28                             Icon Mall Gresik, Gresik                         sent
  7   32                             Plaza Sidoarjo, Sidoarjo city                    sent
  8   34.8                           Sidoarjo Fishing, Sidoarjo                       sent
  9   37                             rest area on the Mojokerto toll road, Mojokerto  sent
  10
  48                             Tjiwi Kimia paper mills toll road, Mojokerto     sent

From this test it can be concluded that the user can receive emergency notifications in all locations, on the condition that the device is connected to the Blynk server and to the internet.

4 Conclusion

The conclusions from the measurement and testing results are as follows:
a. The value read by the MPU-6050 sensor is taken from one axis for each falling direction: y ≤ 180° for falling left, x ≤ 50° for falling right, z ≤ 65° for falling forward, and z ≥ 140° for falling backward.
b. The ultrasonic sensor works well for detecting the presence of the user's legs, and the E18-D80NK proximity sensor works well for detecting the position of the user sitting in the wheelchair.
c. Receiving notifications through the Blynk server works well and is not affected by distance, provided the device has an internet connection.

References

[1] D. M. Putri, "Mengenal Wemos D1 mini dalam dunia IoT," Ilmu Teknologi dan Informasi, 1, 2–4, 2017.
[2] Baktikominfo, Pengertian, fungsi dan kelebihan accelerometer yang tak banyak orang ketahui, https://www.baktikominfo.id/id/informasi/pengetahuan/pengertian_fungsi_dan_kelebihan_accelerometer_yang_tak_banyak_orang_ketahui-785 (accessed: 22/11/2020).
[3] R. T. Asnada and Sulistyono, "Pengaruh inertial measurement unit (IMU) MPU 6050 3-axis gyro dan 3-axis accelerometer pada sistem penstabil kamera (gimbal) untuk aplikasi videografi," 11 (1), 48–55, 2020.
[4] F. N. Riyadi, "Perancangan pendeteksi banjir menggunakan sensor water level berbasis PLC Schneider TM221CE16R dan SMS gateway," 26–27, 2018.
[5] Nyebarilmu, Mengenal aplikasi Blynk untuk fungsi IoT, https://www.nyebarilmu.com/mengenal-aplikasi-blynk-untuk-fungsi-iot/ (accessed: 10/11/2020).
[6] S. M. Sari, "Aplikasi sensor ultrasonik SRF04 dan sensor proximity pada level pengisian tangki air berbasis ATmega8535," 25–26, 2015.
[7] Muslimin and Darul, "Sistem pengaman kursi roda elektrik dari benturan melalui evaluasi sensor jarak," https://repository.its.ac.id/id/eprint/46208, 2017.
[8] Y. Ji, J. Hwang, and E. Y. Kim, "An intelligent wheelchair using situation awareness and obstacle detection," Procedia, Social and Behavioral Sciences, 97, 620–628, 2013.
[9] Miachi, "A study of aware wheelchair with sensor networks for avoiding two meters danger," Procedia, 1004–1010, 2016.

International Journal of Applied Sciences and Smart Technologies
Volume 2, Issue 1, Pages 9–22
p-ISSN 2655-8564, e-ISSN 2685-9432

Pricing the Financial Heston Model Using Parallel Finite Difference Method on GPU CUDA

Pranowo

Department of Informatics Engineering, Faculty of Industrial Technology, Universitas Atma Jaya Yogyakarta, Indonesia
Corresponding author: pranowo@uajy.ac.id

(Received 29-04-2019; revised 17-10-2019; accepted 17-10-2019)

Abstract

An option is a financial instrument in which two parties agree to exchange assets at a predetermined price (strike) and date (maturity). Options can give investors information for setting strategies, allowing them to increase profits and reduce risk. Option prices need to be evaluated accurately and quickly, so that the resulting value can be used at the best momentum.
Option prices can be valued using the Heston model, which has an advantage over other models because volatility is not assumed constant in time (stochastic volatility). Non-constant volatility corresponds to reality, because the underlying asset can fluctuate. The Heston equation has the disadvantage of being a differential equation that is difficult to solve. One way to solve it is numerically, using the finite difference method on non-uniform grids, since the Heston equation can be treated as a parabolic equation. The finite difference method solves differential equations flexibly and does not require matrix processing, but it demands a heavy and slow computing process, because there are many calculation elements and iterations. This study proposes a finite difference numerical solution using Compute Unified Device Architecture (CUDA) parallel programming to solve the Heston model with stochastic volatility, to obtain accurate and fast results. The results of this research show a speedup of 15.52 times in the parallel computing process, with an error of 0.0016.

Keywords: option price, Heston model, finite difference, parallel, GPU CUDA

1 Introduction

Options provide investors with information for setting strategies, so they can increase profits and reduce risk. Option prices can be assessed using the Heston model, which applies stochastic volatility, meaning that volatility is determined randomly and cannot be predicted exactly. Previous research on numerical solutions of option prices using the Heston model is summarized in Table 1.
previous research was limited to solving the heston equation with numerical solutions for option prices and had not been implemented in parallel computing, so this study discusses the numerical solution of the finite difference method on non-uniform grids to solve the heston equation in parallel computing. the computational load of the finite difference method on non-uniform grids grows as the number of grids increases, and this heavy computing process is an obstacle to using many grids to improve the accuracy of results. at first, computers had only one central processing unit (cpu), called the uniprocessor architecture, for computational processing. today computers have evolved into multicore architectures that support parallel processing. parallel processing can be done with parallel programming, namely programming that focuses on solving problems simultaneously by fully using the computational power of the computer architecture [1]. these problems can be solved by computational processing using parallel programming that utilizes the graphics processing unit (gpu) with the compute unified device architecture (cuda) programming model. a gpu consists of a set of processing cores that perform computational processes in parallel, so it can work on many computational processes simultaneously. the cuda programming model is an application programming model that utilizes the gpu for the core computational process. solving the derivative equations of the heston model with the finite difference method and cuda-based parallel programming is expected to determine accurate option values with fast computational performance. table 1.
previous research

1. diamond-cell finite volume scheme for the heston model [7]: proposes a new numerical scheme to solve the partial differential equations that appear in the heston stochastic volatility model.
2. stability of central finite difference schemes for the heston pde [8]: measures stability limits, useful for time discretization methods in numerical solutions of the heston partial differential equation from mathematical finance.
3. pricing european options with proportional transaction costs and stochastic volatility using a penalty approach and a finite volume scheme [9]: establishes standard european option pricing values under proportional transaction costs and stochastic volatility using the penalty approach method and a finite volume scheme.
4. numerical methods to solve pde models for pricing business companies in different regimes and implementation in gpus [10]: solves corporate valuation models using a numerical finite difference approach developed with parallelization using gpu technology.
5. pricing of early-exercise asian options under lévy processes based on fourier cosine expansions [11]: prices asian options with early-exercise features based on two-dimensional integration and backward recursion of fourier coefficients, with several numerical techniques implemented on the gpu.

2 theory

2.1. option price

an option is a financial instrument in which two parties agree to exchange assets at a predetermined price (strike) and date (maturity) [2]. by paying in advance, which is known as the price or premium of the option, the contract holder has the right, but not the obligation, to buy or sell the asset at maturity [3]. for example, the european option model can only be exercised at maturity.
the value of an option is based on the value of the underlying asset, so the option is a derivative; on this basis, the option contract is one type of "derivative security" [4]. the value of the underlying asset is directly proportional to the value of a call option and inversely proportional to the value of a put option: the value of a call goes up if the value of the underlying asset rises, and vice versa, while the value of a put goes down if the underlying asset rises, and vice versa.

2.2. finite difference for the heston pde

the idea of the finite difference method is to discretize the domain with several grid points and use finite differences to estimate derivatives at these grids [5]. the method assumes that the model grids can be structured or unstructured. the finite difference method is a technique to get numerical estimates of a pde. to implement finite differences for the heston pde, it is necessary to discretize grids for the stock price and variance variables and to discretize grids for maturity. this research uses non-uniform grids, which have irregular grid distances between the two variables used. non-uniform grids can be refined at certain points, so accurate price valuations can be produced with only a few grid points. the variables used to form the grids are s (stock price), v (variance), and t (time). it is necessary to determine the maximum and minimum values of s, v, and t as value limits. the maximum values are denoted smax, vmax, and tmax; the values of smax and vmax are obtained from the option case being calculated, while tmax is based on the maturity time. the minimum values are denoted smin, vmin, and tmin, and the minimum value will always be set to 0 as the lower limit [6]. the grid size is set with ns points for the stock price, nv points for volatility, and nt points for maturity.
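the non-uniform grid construction described here can be sketched in a few lines of python. the bodies of the paper's grid equations did not survive extraction, so the sinh-type spacing below (which clusters points near the strike price k, as the paper requires) is an assumed, illustrative form, not the author's exact formula:

```python
import math

def nonuniform_s_grid(ns, s_max, k, c=5.0):
    # hypothetical sinh-type spacing: points cluster around the strike k;
    # c controls how strongly they cluster (not taken from the paper)
    d = k / c
    xi_min = math.asinh(-k / d)
    xi_max = math.asinh((s_max - k) / d)
    step = (xi_max - xi_min) / (ns - 1)
    return [k + d * math.sinh(xi_min + i * step) for i in range(ns)]

grid = nonuniform_s_grid(80, 400.0, 100.0)  # ns = 80 points on [0, s_max]
```

the same construction, with different limits and clustering points, would give the volatility and maturity grids.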
the width of the non-uniform grids for the stock price is arranged by equation (1), the width for volatility by equation (2), and the width for maturity by equation (3). this heston pde model estimates the point values in the interior and boundary sections separately. the interior part is estimated using first-order derivatives with central differences. the boundary section is governed by certain conditions that need to be initialized: the condition at maturity and the conditions at the boundaries of s and v. at maturity, t = tmax, the value of the call option is its intrinsic value (payoff), giving equation (4), u(s, v, tmax) = max(s − k, 0), with limits 0 ≤ s ≤ smax and 0 ≤ v ≤ vmax. when s = 0, the call option becomes worthless, giving equation (5), u(0, v, t) = 0. at the upper stock-price boundary, s = smax, the condition of equation (6) is used, and at the upper volatility boundary, v = vmax, the condition of equation (7) is used, each with its corresponding limits on the other variables. at the lower volatility boundary, v = 0, equation (8) is used, in which the derivative with respect to s is solved using the central difference and the derivative with respect to v is solved using the forward difference. an explicit scheme is used as the technique to solve the finite difference system, and the elements are updated by equation (9).

2.3. compute unified device architecture (cuda) programming structure

the cuda programming model can execute applications on heterogeneous computing systems by simply annotating code with a set of extensions to the c programming language. it allows host (cpu) and device (gpu) memory to be allocated appropriately, so applications can be optimized to maximize the use of the hardware [1]. the structure of the cuda application process can be seen in figure 1. figure 1.
cuda programming structure

cuda code that consists of serial code is run on the host, while parallel code is run on the gpu device. host code is written in ansi c and device code is written using cuda c. all code can be placed in a single source file, or multiple source files can be used to build the desired application or library. code written for hosts and devices can be compiled and run using the nvidia c compiler (nvcc).

3 algorithm

parallel programming is a programming approach that forms a program capable of working on several processes in parallel by utilizing multiple processors. cuda uses the simt (single instruction, multiple threads) execution model, which is similar to the simd (single instruction, multiple data) execution model for general data-parallel programming [10]. the cuda code execution unit, the kernel, executes a set of threads in each block simultaneously and independently. each thread runs on one processor simultaneously, on the same instructions but different data. figure 2 describes the cpu and gpu algorithms. based on the flowchart, the gpu algorithm can simplify the cpu algorithm so that it is not complex: simple processes such as the temporary grid update, u, can be done simultaneously with boundary initialization. therefore the gpu algorithm is simpler and much less repetitive; repetition is only needed for the time iterations. the 2-d matrix used is changed to a 1-d array, because gpus have a different matrix index concept. the cpu index is row-major, while the index on the gpu is column-major; the gpu uses the column-major index because it matches the hierarchy of blocks and threads. changing the index to 1-d means the matrix index can be adjusted according to an indexing formula (column index times the number of rows plus the row index) in order to match the matrix concepts of the cpu and gpu.
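the row-major vs column-major conversion described above can be illustrated with a small python sketch; the function names are illustrative, and the formulas shown are the generic 2-d-to-1-d mappings rather than the paper's (unrecovered) symbols:

```python
def idx_row_major(row, col, n_cols):
    # cpu-style layout: elements of one row are stored next to each other
    return row * n_cols + col

def idx_col_major(row, col, n_rows):
    # gpu-style layout: elements of one column are stored next to each
    # other, matching the block/thread hierarchy described in the text
    return col * n_rows + row

# element (row 1, col 2) of a 3 x 4 matrix lands at different offsets
offset_cpu = idx_row_major(1, 2, n_cols=4)  # 1 * 4 + 2 = 6
offset_gpu = idx_col_major(1, 2, n_rows=3)  # 2 * 3 + 1 = 7
```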
in addition, memory allocation is only done for the matrix pointers: memory is allocated on the device (gpu) and values are copied from the host (cpu) to the device. other parameters that are not pointers can be used directly across host and device.

figure 2. flowchart of cpu (left) and gpu (right) algorithms

the finite difference non-uniform grids numerical solution requires complex and numerous computational processes, so an increase in the number of grids can be used to measure computational performance. this study conducted numerical experiments by increasing the number of grids in stages to see the difference between the performance of the gpu algorithm and the cpu algorithm. experiments were carried out on a stand-alone computer with an intel core i7 cpu and an nvidia geforce gtx gpu with gddr5x memory.

4 results and discussions

the implementation of the gpu algorithm to solve the heston model equations using the finite difference method on non-uniform grids needs to be verified. verification is done by conducting numerical experiments to see the convergence of the finite difference non-uniform grids solution to the exact value as the number of grid points for stock price and volatility increases. the heston model parameters (interest rate, mean-reversion rate, long-run variance, volatility of volatility, and correlation) are fixed for this numerical experiment. the combination of the number of grid points for the stock price and the volatility is varied. the size of the grids is formed following the condition that the grids become finer as they approach the strike price and around the point v = 0. the number of grids for the stock price, ns, ranges from 80 to 190 in increments of 10.
the number of grids for volatility, nv, ranges from 20 to 75 in increments of 5. this numerical experiment is iterated 3,000 times, the number of time points. the maximum grid combination in this experiment is ns = 190 and nv = 75. the non-uniform grids can be seen in figure 3.

figure 3. non-uniform grids

figure 4. surface of option prices using finite difference non-uniform grids as functions of the stock price and volatility

the exact value of the option price using the closed-form solution of the heston model is 4.2783. the results of the finite difference non-uniform grids with combinations of up to 190 and 75 grids can be seen in table 2, resulting in a value of 4.2801 using the gpu algorithm and 4.2805 using the cpu algorithm. experiments were also carried out by increasing the ns and nv grids to reach the limit before instability was achieved; the results show that 190 and 150 grids are the limit before instability occurs.

table 2. relative error of the numerical finite difference non-uniform grids cpu and gpu solutions

ns   nv   cpu price   cpu error   gpu price   gpu error
80   20   4.2767      -0.0016     4.2760      -0.0023
90   25   4.2868       0.0085     4.2864       0.0081
100  30   4.2811       0.0028     4.2807       0.0024
110  35   4.2797       0.0014     4.2792       0.0009
120  40   4.2814       0.0031     4.2810       0.0027
130  45   4.2808       0.0025     4.2804       0.0021
140  50   4.2812       0.0029     4.2808       0.0025
150  55   4.2819       0.0036     4.2814       0.0031
160  60   4.2796       0.0013     4.2792       0.0009
170  65   4.2800       0.0017     4.2796       0.0013
180  70   4.2813       0.0030     4.2809       0.0026
190  75   4.2805       0.0022     4.2801       0.0018
190  150  4.2705      -0.0078     4.2799       0.0016

the error is obtained by calculating the difference between the option price of the numerical result and the exact value.
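since the error column of table 2 is the numerical price minus the exact value, every row of the table must imply the same exact price; a quick python check recovers the closed-form value:

```python
# (ns, nv, cpu price, cpu error) rows copied from table 2
rows = [
    (80, 20, 4.2767, -0.0016),
    (100, 30, 4.2811, 0.0028),
    (190, 75, 4.2805, 0.0022),
]

# error = price - exact, so exact = price - error for every row
exact_values = [round(price - err, 4) for _, _, price, err in rows]
```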
the error in the experimental results varies: with each increase in the number of grids, the error does not always decrease. looking further, however, the overall error continues to decrease as the grid size increases. in the maximum grid combination of 190 and 150 grids, the smallest error, 0.0016, is obtained, so it can be concluded that increasing the number of grid points will increase accuracy. table 3 shows that the gpu algorithm can produce values that are closer to the exact values and is more accurate when the grids are enlarged. grid enlargement also runs faster when processed using the gpu than when processed using the cpu. the final option-price surface using finite difference non-uniform grids after 3,000 iterations is shown in figure 4. the comparison of gpu algorithm performance with cpu algorithm performance, made by numerical experiments that increase the number of grids in stages, can be seen in table 3.

table 3. performance of cpu vs gpu with 3,000 time steps and various grids

ns   nv   grid points   cpu time (s)   gpu time (s)   speedup (times)   time saved (s)
80   20   1600          0.243          0.176          1.38x             0.067
90   25   2250          0.348          0.171          2.04x             0.177
100  30   3000          0.467          0.172          2.72x             0.295
110  35   3850          0.607          0.171          3.55x             0.436
120  40   4800          0.766          0.179          4.28x             0.587
130  45   5850          0.921          0.179          5.15x             0.742
140  50   7000          1.111          0.181          6.14x             0.930
150  55   8250          1.321          0.176          7.51x             1.145
160  60   9600          1.547          0.181          8.55x             1.366
170  65   11050         1.793          0.186          9.64x             1.607
180  70   12600         2.033          0.182          11.17x            1.851
190  75   14250         2.323          0.193          12.04x            2.130
190  150  28500         4.765          0.307          15.52x            4.458

based on the experimental results, gpu performance is stable and always superior to the cpu. with grids that become finer as they approach the strike price k and the point v = 0, as ns and nv are increased gradually, the gpu's performance advantage continues to grow.
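the speedup column of table 3 is simply the ratio of cpu time to gpu time; a small python check against the first and last rows:

```python
def speedup(cpu_seconds, gpu_seconds):
    # how many times faster the gpu run is than the cpu run
    return round(cpu_seconds / gpu_seconds, 2)

small_grid = speedup(0.243, 0.176)  # 80 x 20 grid points
large_grid = speedup(4.765, 0.307)  # 190 x 150 grid points
```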
experiments were also carried out by increasing the grids to reach the limit before instability was achieved. the results were obtained on grids of 190 and 150, where computing performance reaches 15.52 times faster. the bigger the grids, the more cpu performance decreases, while gpu performance remains stable.

5 conclusions

this study aims to solve the equations of the heston model using numerical solutions with finite difference non-uniform grids based on compute unified device architecture (cuda) parallel programming to get accurate and fast results. based on this research, finite difference non-uniform grids with the gpu algorithm can produce values that approach the exact values and are more accurate when the grids are enlarged. the error in the experimental results continues to decrease as ns is increased by 10 points and nv by 5 points. the finite difference non-uniform grids numerical solution with the maximum combination of 190 and 75 grids produces a value of 4.2801 with an error of 0.0018, compared with the initial combination of 80 and 20 grids, which produces a value of 4.2760 with an error of -0.0023. in the stability experiment with a combination of 190 and 150 grids, the error decreases to 0.0016. this proves that increasing the number of grid points will increase accuracy. grid enlargement also runs faster when processed with the gpu: the computational process is 1.38 times faster at the initial combination of 80 and 20 grids and continues to increase up to 12.04 times faster on the 190 and 75 grids. in the stability experiment with a combination of 190 and 150 grids, computing performance reaches 15.52 times faster.

acknowledgements

this research was supported in part by universitas atma jaya yogyakarta (uajy). this research was conducted using the gpu computers in the graduate program of informatics engineering uajy.

references

[1] p. kutik and k.
mikula, "diamond-cell finite volume scheme for the heston model," discrete & continuous dynamical systems, 8 (5), 913–931, 2015.
[2] k. j. in't hout and k. volders, "stability of central finite difference schemes for the heston pde," numerical algorithms, 60 (1), 115–133, 2012.
[3] w. li and s. wang, "pricing european options with proportional transaction costs," computers and mathematics with applications, 73 (11), 2454–2469, 2017.
[4] d. castillo, a. m. ferreiro, j. a. garcía-rodríguez, and c. vázquez, "numerical methods to solve pde models for pricing business companies in different regimes and implementation in gpus," applied mathematics and computation, 219 (24), 11233–11257, 2013.
[5] b. zhang and c. w. oosterlee, "pricing of early-exercise asian options under lévy processes based on fourier cosine expansions," applied numerical mathematics, 78, 14–30, 2014.
[6] j. cheng, m. grossman, and t. mckercher, professional cuda c programming, john wiley & sons, indianapolis, 2014.
[7] e. tandelilin, portofolio dan investasi: teori dan aplikasi, kanisius, yogyakarta, 2010.
[8] g. i. ramírez-espinoza, "conservative and finite volume methods for the pricing problem," master thesis, faculty of mathematics and natural science, bergische universität wuppertal, wuppertal, 2011.
[9] t. f. crack, basic black-scholes: option pricing and trading, 2009.
[10] y. chen, "numerical methods for pricing multi-asset options," master thesis, graduate department of computer science, university of toronto, toronto, 2017.
[11] f. d. rouah, the heston model and its extensions in matlab and c#, john wiley & sons, hoboken, 2013.
international journal of applied sciences and smart technologies, volume 3, issue 2, pages 257–264, p-issn 2655-8564, e-issn 2685-9432

writer identification based on hand writing using artificial neural network

rosalia arum kumalasanti 1,*
1 department of informatics, faculty of science and technology, sanata dharma university, yogyakarta, indonesia
* corresponding author: rosaliasanti@usd.ac.id
(received 26-11-2021; revised 30-12-2021; accepted 31-12-2021)

abstract

humans are social beings who depend on social interaction, and the form of social interaction used most often is communication. communication is one of the bridges that connects social relations between humans, and it can be delivered in two ways, namely verbal or nonverbal. handwriting is an example of nonverbal communication using paper and writing utensils. each individual's writing has its own uniqueness, so handwriting often becomes a character or characteristic of the author. the handwriting pattern usually becomes a character of the writer, so people who recognize the writing will easily guess the ownership of the related handwriting. however, handwriting is also misused by irresponsible people in the form of handwriting falsification. acts of handwriting falsification often occur in the workplace and even in the field of education. this is one of the driving factors for creating a reliable system for tracing someone's handwriting to its owner. this study discusses the identification of a person's handwriting based on its ownership. the output of this research is the id of the author and the accuracy, as a percentage, of the system's reliability in identification.
the results of this study are expected to have a good impact on all parties, in order to minimize plagiarism. the handwriting identification to be built consists of two main processes, namely the training phase and the testing phase. in the training phase, the handwritten image is subjected to several processes, namely thresholding and wavelet conversion, and is then trained using the backpropagation artificial neural network. in the testing phase, the process is the same as in the training phase, but at the end of the process a comparison is made between the image data stored during training and a comparison image. backpropagation ann can work optimally if it is trained using input data with a predetermined size, learning rate, parameters, and number of nodes in the network. it is expected that the offered method can work optimally and produce an accurate percentage in order to minimize handwriting falsification.

keywords: handwriting, artificial neural networks, backpropagation

1 introduction

the use of communication in the digital era keeps growing and has become an important need in various aspects of life. communication is the essence of humans as social beings interacting with each other. in general, communication is defined as the process of delivering and receiving messages between two or more people, directly or indirectly. in short, the purpose of communication is to create understanding between two or more parties. one example of nonverbal communication is writing. handwriting is still often used for various purposes, such as writing documents or forms, writing letters, and also for signatures.
everyone's handwriting has different characteristics and patterns, so handwriting is considered a unique attribute owned by each person, also known as a biometric attribute. the uniqueness of handwriting allows people to distinguish one writing from another according to the characteristics of its pattern. handwriting is often considered a trivial thing, so many people think that the validity of handwriting is not very important. this contributes to the increasing amount of plagiarism, especially in the work environment and even in the educational environment. visually, with the naked eye, humans cannot directly distinguish and identify the owner of a handwriting sample; even when humans notice differences in handwriting, they have difficulty identifying its owner. this is an example of a person's lack of concern for handwriting ownership. this study aims to protect handwriting as a biometric attribute that is protected by ownership. the handwriting identification system will be built using backpropagation ann and supporting parameters. a reliable system is expected to be a solution for minimizing handwriting forgery and to provide awareness to everyone about the importance of handwritten characters. ann works like the human brain in identifying an object: it studies patterns and stores characteristics for comparison with other objects. the more characteristics stored for study, the smarter the ann will be in identification.

2 research methodology

writing is one of the ways humans communicate, with the aim of conveying information using written media. although the digital era has developed, handwriting is still often used for formal and non-formal activities.
handwriting has become one of the biometric attributes, and it offers several methods that can be developed in research. most handwriting identification uses ann and several other supporting algorithms; ann is used to perform the initial analysis and categorize handwritten input images [1]. each individual's writing has its own characteristics that make a difference.

2.1. biometrics and the character of a person

biometrics is a science and innovation for describing information from the human body naturally [2]. biometrics refers to individual differences based on physiological or social attributes; the biometric value itself can be an individual's physiological or social attributes in full. a person's handwriting has its own characteristics related to that person's character. graphology is the science of analyzing handwriting and the characteristics of a person's personality through the extraction of shape-based features [3]. the identification of the four dominant forms of handwriting is divided into several attributes, as shown in table 1 below.

2.2. backpropagation ann

artificial neural networks (ann) refer to a paradigm of information processing or computing systems inspired by the biological neural networks of the human brain. the system is not biologically identical to the nervous system, but it is designed to process information in the same way the human brain does. the network consists of many neurons that are interconnected and work simultaneously to achieve certain goals. just like the human brain, an ann learns from the example objects presented to it, so an ann can be configured for an application, such as data classification or character recognition, through the learning process. figure 1 shows the architecture of an ann. figure 1.
artificial neural network architecture [4]

backpropagation is one of the ann models; it has the ability to find a balance between the network's ability to recognize the patterns used during training and its ability to give a correct response to input patterns similar to those used during training. the backpropagation algorithm finds the minimum error in the weights by using gradient descent. the combination of weights that minimizes the error is considered capable of providing a solution to the learning process. backpropagation has several neurons at each network layer. preprocessing in this study utilizes the wavelet transform to represent the signal in time and frequency. the wavelet transform is a mathematical tool based on several transfer-layer functions, and it produces coefficients that represent signal characteristics [5]. wavelets provide accuracy and analysis of signals at more than one resolution, which is known as multi-resolution capability. the advantage of this multi-resolution analysis is that features that may not be detected at one resolution can be detected at another resolution [6]. the discrete wavelet transform is an efficient choice for images in the form of discrete data. the wavelet has decomposition levels that are useful for obtaining optimal information from an image. figure 2 below is a level 2 decomposition image.

figure 2. decomposition level 2 [7]

3 results and discussion

identification of handwritten images consists of two phases, namely training and testing. a number of samples in the form of handwritten images are used as input data. the greater the amount of input data, the better the ann studies the patterns, but it is also necessary to consider the amount of time the ann needs to study them.
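the gradient-descent weight update that backpropagation performs, as described in section 2.2, can be sketched with a single sigmoid neuron; the data and learning rate below are hypothetical, and a real handwriting network would have many layers and nodes:

```python
import math

# toy labelled data: output 1 when the input exceeds 0.5 (hypothetical)
data = [(x / 10.0, 1.0 if x > 5 else 0.0) for x in range(11)]
w, b, lr = 0.0, 0.0, 1.0

def forward(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid activation

def mean_squared_error():
    return sum((forward(x) - y) ** 2 for x, y in data) / len(data)

loss_before = mean_squared_error()
for _ in range(2000):
    for x, y in data:
        out = forward(x)
        # chain rule: d(error)/d(output) * d(output)/d(weighted sum),
        # propagated backwards to the weight and bias
        grad = 2.0 * (out - y) * out * (1.0 - out)
        w -= lr * grad * x
        b -= lr * grad
loss_after = mean_squared_error()
```

after training, the loss is lower than before, which is the behaviour the text describes: the weight combination that minimizes the error is the learned solution.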
this identification aims to build an effective and efficient system for users to obtain the identity of the author. image samples also need to be uniform in size so that the system can easily get the characters in each image with optimal size and results. figure 3 below shows the handwriting identification flow using backpropagation ann.

figure 3. handwriting identification flowchart

the handwritten image to be sampled must first be converted into a digital image with the help of a scanner, so a digital image is obtained that is ready to be processed. preprocessing is the initial step applied to the image, including the threshold, after which a wavelet transform is applied to get the important information in the image. after that the image is normalized before entering the training phase. the image training phase is carried out for each id/author in order to get the character and pattern of the handwritten image. the resulting handwriting characters, in the form of weights generated by backpropagation ann, are then stored in the data store to be compared with the test image. the test image can be taken from the same author with different raw images, so the test image also goes through the same preprocessing. the system is considered successful if it can provide appropriate results. figure 4 below represents the training phase for each id/author.

figure 4. training phase

the system is considered reliable and successful if it can learn the character of each handwriting pattern so as to be able to distinguish one author's handwritten characters from another's.
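the preprocessing chain above (threshold, then wavelet decomposition) can be sketched on a single row of grayscale pixels; the haar wavelet is used here as the simplest concrete choice, since the text does not name the wavelet family:

```python
def threshold(pixels, t=128):
    # global threshold: grayscale values become binary ink / background
    return [1 if p >= t else 0 for p in pixels]

def haar_level(signal):
    # one haar decomposition level: pairwise averages (approximation)
    # and pairwise half-differences (detail coefficients)
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

row = [200, 180, 40, 20, 220, 210, 10, 0]   # one scanned pixel row
binary = threshold(row)
approx1, detail1 = haar_level(row)          # level 1
approx2, detail2 = haar_level(approx1)      # level 2, as in figure 2
```

each level halves the signal length, which is why a level-2 decomposition of an image yields the compact sub-bands shown in figure 2.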
the indicator of the success of the system is the percentage of identification ability over all the ids studied. of course, errors may occur here, so the handwriting identification needs to be simulated using several parameters to get the optimal percentage. the consistency of a person's handwriting is also an influential factor, so this handwriting identification can also be extended with several attributes to get more optimal results.

4 conclusion

this study presents a method to identify handwriting using a backpropagation neural network. ann can be used at high complexity and is able to provide a progressive approach that is verified through various error rates. utilization of the backpropagation algorithm can minimize errors because there is a reverse/backward checking phase to correct them. it is hoped that the design of this handwriting identification system can help minimize plagiarism among the public in all aspects. a possible addition to this research is real-time identification that can provide direct results, making it more effective and efficient.

references

[1] m. sutajha, m. sandeep, c. aiswarya and b. mounika, "hand writing recognition system based on neural network," international journal on innovative technology and exploring engineering (ijitee), 9 (1), 4977–4980, 2019.
[2] r. ahuja and l. duhan, "optimized multi model biometric based human authentication using deep neural network," international journal of recent technology and engineering (ijrte), 8 (3), 280–290, 2019.
[3] p. b. nair, a. m. johnson, a. m. alex and a. sebastian, "android app for handwriting analysis using deep learning," journal of communication engineering and its innovations, 5 (3), 16–21, 2019.
[4] s. aqab and m. u.
Tariq, "Handwriting recognition using artificial intelligence neural network and image processing," International Journal of Advanced Computer Science and Applications (IJACSA), 11 (7), 137–146, 2020.
[5] P. G. Patil and R. S. Hegadi, "Offline handwritten signature classification using wavelet and support vector machines," International Journal of Engineering Science and Innovative Technology, 2 (3), 573–589, 2013.
[6] P. Divyasri, K. Depti, and D. S. Rao, "Signature analysis of centrifugal fan response due to unbalance using wavelet analysis," International Journal of Advance Research in Science and Engineering, 3 (7), 205–213, 2014.
[7] Suma'inna, "Detection of cardiac abnormalities based on ECG pattern recognition using wavelet and artificial neural network," Far East Journal of Mathematical Sciences, 76 (1), 111–122, 2013.

International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 2, Pages 215–224, p-ISSN 2655-8564, e-ISSN 2685-9432

Heat Transfer Characteristic on Wing Pairs Vortex Generator Using 3D Simulation of Computational Fluid Dynamics

Petrus Setyo Prabowo 1, Stefan Mardikus 2,*, Ewaldus Credo Eukharisto 2

1 Department of Electrical Engineering, Sanata Dharma University, Paingan, Maguwoharjo, Depok, Sleman, Yogyakarta 55282, Indonesia
2 Department of Mechanical Engineering, Sanata Dharma University, Paingan, Maguwoharjo, Depok, Sleman, Yogyakarta 55282, Indonesia
* Corresponding author: stefan@usd.ac.id

(Received 08-11-2021; revised 07-12-2021; accepted 07-12-2021)

Abstract

Vortex generators are added surfaces that increase the heat transfer area and change the flow characteristics of the working fluid in order to increase the heat transfer coefficient. The use of vortex generators produces longitudinal vortices that can increase heat transfer performance because of the low pressure behind the vortex generators.
This investigation used a delta winglet vortex generator combined with a rectangular vortex generator at Reynolds numbers ranging from 6,000 to 10,000. The Nusselt number, friction factor, velocity vector, and temperature distribution were evaluated as parameters.

Keywords: parallelogram winglet vortex generator, heat exchanger, heat transfer performance

1 Introduction

The heat exchanger is one of the most important components in industry. Chemical plants, power plants, food factories, hospitals, and supercomputers use heat exchangers in their daily operation. Heat exchangers transfer heat, for example in the cooling systems of supercomputers, boilers in power plants, evaporators that dry food in food factories, hot and cold piping systems in hospitals, and separators in chemical plants. One type of heat exchanger used most in these industries is the circular tube heat exchanger.

Circular tube heat exchanger performance is affected by several parameters. Heat transfer area and fluid flow characteristics are two parameters with a great effect on performance. The wider the heat transfer area, the higher the heat transfer rate of the exchanger. Depending on the method, the added heat transfer area can also adjust the flow characteristics of the working fluid. One method that increases the heat transfer area and changes the fluid flow characteristics is the use of vortex generators. Vortex generators are added surfaces that increase the heat transfer area and change the flow characteristics of the working fluid to increase heat transfer performance. The use of vortex generators produces longitudinal vortices that can increase heat transfer performance. The longitudinal vortices form because of the low pressure behind the vortex generators.
The low-pressure region makes the working fluid in the middle of the flow change course toward the side, and the fluid at the side move toward the middle. Previous research shows that the use of vortex generators can increase heat transfer performance [1]. Deshmukh and Vedula (2014) showed that a circular tube heat exchanger with a curved delta winglet vortex generator on the inner side has higher heat transfer performance than a plain circular tube. Over Reynolds numbers ranging from 10,000 to 45,000, the curved delta winglet vortex generator increased the Nusselt number by a factor of 1.3 to 5.0 [2]. Islam and Kharoua (2018) studied the thermal performance and flow behaviour of winglet vortex generators in a circular tube experimentally. Attack angles of 0°, 15°, 30°, and 45°, blockage ratios of 0.1, 0.2, and 0.3, row values of 4, 8, and 12, and relative pitch ratios of 4.8, 2.4, and 1.6 were investigated at Reynolds numbers of 6,000 to 33,000. The results show that the Nusselt number decreases with pitch row but increases with attack angle and blockage ratio [3]. Liu et al. (2018) numerically and experimentally studied the heat transfer performance of a circular tube heat exchanger enhanced with rectangular winglet vortex generators [4]. Reynolds numbers ranging from 5,000 to 17,000 were used to investigate the effect of the rectangular winglet vortex generators at slant angles of 10°, 20°, 30°, and 35°. The results show an increase of the Nusselt number by 1.15 to 2.49 times and of the friction factor by 2.09 to 12.32 times. Modhi and Rathod (2019) compared the use of sinusoidal wavy and elliptical curved rectangular winglet vortex generators using a 2D numerical method.
Reynolds numbers of 400 to 1000 were used to study heat transfer enhancement and pressure drop in wavy-up, wavy-down, curved-up, and curved-down variations, with plain rectangular winglet vortex generators as the baseline. The results show an increase in heat transfer performance with a moderate pressure drop [5]. Zhai et al. (2019) studied heat transfer augmentation in a circular tube using delta winglet pair vortex generators. Reynolds numbers ranging from 5,000 to 25,000 were used to study the heat transfer enhancement characteristics at attack angles of 10°, 20°, 30°, and 40°, heights of 5 mm, 7.5 mm, and 10 mm, and spacings of 10 mm, 15 mm, and 20 mm. The experimental results show that delta winglet pair vortex generators can increase the Nusselt number by up to 75% compared with a plain circular tube [6]. Sun et al. (2020) studied the effect of rectangular winglet vortex generators in a circular tube heat exchanger using several parameter variations: the number of vortex generators deployed (4, 6, 8), height ratio (0.05, 0.1, 0.2), and pitch ratio (1.57, 3.14, 4.71). The results show an increase of the Nusselt number by 1.15 to 2.23 times, with a friction factor increase of 1.46 to 11.63 times [7]. Pourhedayat et al. (2020) numerically studied the use of triangular winglet vortex generators in a circular tube, with longitudinal and latitudinal pitch variations. The results show that the smaller the longitudinal pitch, the higher the heat transfer rate; among latitudinal pitches from 0 mm to 40 mm, a 20 mm latitudinal pitch gave the highest heat transfer enhancement [8]. Zhang et al. (2020) investigated the heat transfer performance of rectangular winglet vortex generators applied in a circular tube heat exchanger. Parallel and V-shaped configurations were used in that study, with variations of attack angle and length ratio.
The parallel configuration increases the heat transfer rate by 54% to 188% and the flow resistance by 152% to 568%, while the V-shaped configuration increases the heat transfer rate by 60% to 118% and the flow resistance by 141% to 644% [9].

Based on the previous studies reviewed above, there are still many improvements that can be made. This study focuses on heat transfer enhancement using a novel type of vortex generator: the parallelogram winglet vortex generator. The parallelogram winglet vortex generator is inspired by combining the characteristics of the delta winglet and rectangular winglet vortex generators; it has the shape of a delta winglet vortex generator merged with a rectangular vortex generator. Heat transfer enhancement of a circular tube heat exchanger with parallelogram winglet vortex generators was studied numerically, with a plain circular tube heat exchanger as the baseline. Reynolds numbers ranging from 6,000 to 10,000 were used to identify the heat transfer performance of the parallelogram vortex generators in two configurations: leaning following the flow and leaning against the flow. The Nusselt number, friction factor, velocity vector, and temperature distribution were used as evaluation parameters.

2 Research Methods

A three-dimensional numerical method was carried out in this study to investigate the heat transfer performance of a circular tube heat exchanger embedded with parallelogram winglet vortex generators.

2.1. Model Description

The simulation was carried out for two different parallelogram winglet vortex generator configurations: leaning following the flow, shown in Figure 1, and leaning against the flow, shown in Figure 2.
The length of the pipe is 700 mm, with the vortex generators embedded inside the circular tube. As can be seen in Figure 3, the diameter of the tube is 50.8 mm with a thickness of 3 mm. Six sets of parallelogram winglet vortex generators were installed with a pitch of 100 mm; each set consists of four vortex generators arranged at 0°, 90°, 180°, and 270° of the cross-sectional area. An attack angle of 0° was applied to the parallelogram winglet vortex generators to investigate their most basic effect in a circular tube heat exchanger. The geometry of the parallelogram winglet vortex generator is shown in Figure 4: it has a short diagonal of 15 mm and a long diagonal of 30 mm.

Figure 1. Lean following the flow configuration.
Figure 2. Lean against the flow configuration.
Figure 3. Cross section of the physical model.
Figure 4. Cross section of the physical model of the parallelogram winglet vortex generator.

2.2. Boundary Condition

The three-dimensional numerical method was carried out at Reynolds numbers ranging from 6,000 to 10,000. The working fluid was assumed to be in steady turbulent flow. The inlet temperature of the working fluid was 322.2 K; the wall and the vortex generators were assumed to have the same temperature of 300 K. The working fluid used in this study was ammonia. Pressure-velocity coupling in the governing equations was solved using the SIMPLE algorithm. The thermal convection and velocity terms were discretized by the second-order upwind scheme. A residual criterion of less than 10⁻⁶ was used for the energy equation and 10⁻⁴ for the other variables.
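Under the boundary conditions above, each target Reynolds number fixes an inlet velocity, and the evaluation parameters named in the introduction (Nusselt number and friction factor) follow from standard definitions. The sketch below is an editor's illustration, not the authors' code; the ammonia vapour properties near the 322.2 K inlet (density, viscosity, conductivity) are rough placeholder estimates:

```python
# Sketch (not the authors' code) of the quantities behind the setup above.
# Ammonia vapour properties near the 322.2 K inlet are rough placeholder
# estimates, not values from the paper.

RHO = 0.64    # kg/m^3, approximate ammonia vapour density at ~322 K, 1 atm
MU = 1.1e-5   # Pa.s, approximate dynamic viscosity
D = 0.0508    # m, tube inner diameter from the model
L = 0.700     # m, tube length from the model

def inlet_velocity(re, rho=RHO, mu=MU, d=D):
    """Inlet velocity for a target Reynolds number: Re = rho*u*d/mu."""
    return re * mu / (rho * d)

def nusselt(h, d=D, k=0.025):
    """Average Nusselt number from a convection coefficient h [W/m^2 K]
    and fluid thermal conductivity k [W/m K] (k is a placeholder)."""
    return h * d / k

def friction_factor(dp, u, rho=RHO, d=D, length=L):
    """Darcy friction factor from the pressure drop dp [Pa] over the tube."""
    return dp / ((length / d) * 0.5 * rho * u ** 2)

for re in (6000, 8000, 10000):
    print(re, round(inlet_velocity(re), 2), "m/s")
```

In a CFD post-processing step, h and dp would come from the simulated wall heat flux and the pressure difference between inlet and outlet.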
3 Results and Discussion

The heat transfer performance of the cases studied was evaluated using the Nusselt number and the thermal gradient. Figure 5 shows that the Nusselt number increases with the Reynolds number, a tendency similar to the study by Liu et al. [4]. The overall Nusselt numbers in this study are higher than in Liu et al. [4] because ammonia, the working fluid used here, has a lower thermal conductivity than the water used in that study; since Nu = hD/k, a lower conductivity yields a higher Nusselt number for a comparable convection coefficient.

The Nusselt numbers in Figure 5 increase with the Reynolds number in all cases studied. For the plain circular tube, the increases of the Nusselt number from Re = 6,000 to 7,000, 8,000, 9,000, and 10,000 are 12.86%, 25.54%, 37.18%, and 47.38%. For the vortex generators in the leaned following the flow configuration, the increases are 11.34%, 22.55%, 33.71%, and 44.79%; for the leaned against the flow configuration, 10.72%, 21.61%, 32.44%, and 43.00%. Based on these data, the plain tube shows the largest growth of Nusselt number with Reynolds number, followed by the vortex generators leaned following the flow, with the smallest growth for the vortex generators leaned against the flow.

Compared with the plain case, the leaned following the flow configuration gives an average Nusselt number increase of 5.38%, with the lowest value of 4.39% at Re = 9,000 and the highest of 7.10% at Re = 6,000, while the leaned against the flow configuration gives an average increase of 22.49%, with the lowest value of 21.04% at Re = 9,000 and the highest of 25.36% at Re = 6,000.
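As an aside, the plain-tube growth of Nu with Re reported above tracks the classical turbulent-pipe scaling Nu ∝ Re^0.8 (the Dittus-Boelter exponent; a textbook correlation used here only as an editor's sanity check, not a fit by the authors):

```python
# Sanity check: for a plain tube, Dittus-Boelter gives Nu ~ Re**0.8, so the
# expected relative increase from Re = 6000 is (Re/6000)**0.8 - 1. Compare
# with the plain-tube increases reported in the text.
reported = {7000: 12.86, 8000: 25.54, 9000: 37.18, 10000: 47.38}  # percent
for re, pct in reported.items():
    predicted = ((re / 6000) ** 0.8 - 1) * 100
    print(f"Re={re}: reported {pct}%  vs  Re^0.8 scaling {predicted:.1f}%")
```

The simulated increases stay within a few percentage points of the Re^0.8 trend, which supports the plausibility of the plain-tube baseline.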
The leaned against the flow configuration therefore shows a larger increase of the Nusselt number than the leaned following the flow configuration, with its lowest and highest values at the same Reynolds numbers. The largest increase in Nusselt number occurs at Re = 6,000, caused by the longer contact time between the working fluid and the heat transfer surface of the tube and the applied vortex generators [10].

Figure 5. Reynolds number vs Nusselt number for the plain tube, the leaned following the flow parallelogram winglet vortex generator arrangement, and the leaned against the flow arrangement.

The temperature gradient was also used in this study to understand the heat transfer phenomena. Figure 6, Figure 7, and Figure 8 show the thermal gradient of the plain tube, the leaned following the flow configuration, and the leaned against the flow configuration at Re = 6,000. The temperature gradient at Re = 6,000 was used to investigate the temperature distribution characteristics because this Reynolds number shows the highest increase of the Nusselt number. The plain case in Figure 6 shows that the working fluid temperature has a smooth pattern from the inlet to the outlet, meaning the working fluid has a relatively low temperature distribution. The use of vortex generators increases the temperature distribution due to the increased contact surface between the working fluid and the tube wall [4]. The vortex generator cases in Figure 7 and Figure 8 show a better temperature distribution than the plain case in Figure 6: the high temperature occurs not only in the middle of the stream but also near the tube wall.
However, the temperature distribution of the leaned against the flow configuration shown in Figure 8 is higher than that of the leaned following the flow configuration shown in Figure 7.

Figure 6. Temperature gradient of the plain case, Re 6000.
Figure 7. Temperature gradient of the leaned following the flow configuration, Re 6000.
Figure 8. Temperature gradient of the leaned against the flow configuration, Re 6000.

4 Conclusion

This study investigates the effect of parallelogram winglet vortex generators inside a circular tube heat exchanger. A plain circular tube heat exchanger was used as the base case, compared against vortex generators deployed inside the tube in the leaned following the flow and leaned against the flow configurations. Based on the results obtained, the conclusions of the study are:
a. The use of parallelogram winglet vortex generators can increase the heat transfer performance of a plain tube heat exchanger using ammonia as the working fluid.
b. The leaned against the flow configuration results in a higher increase of the Nusselt number than the leaned following the flow configuration.
c. The leaned following the flow configuration increases the Nusselt number by 4.39% to 7.10%, while the leaned against the flow configuration increases it by 21.04% to 25.36%.

References

[1] Y. Xu, M. D. Islam, and N. Kharoua, "Numerical study of winglets vortex generator effects on thermal performance in a circular pipe," International Journal of Thermal Sciences, 112, 304–317, 2017.
[2] P. W. Deshmukh and R. P. Vedula, "Heat transfer and friction factor characteristics of turbulent flow through a circular tube fitted with vortex generator inserts," International Journal of Heat and Mass Transfer, 79, 551–560, 2014.
[3] Y. Xu, M. D. Islam, and N.
Kharoua, "Experimental study of thermal performance and flow behaviour with winglet vortex generators in a circular tube," Applied Thermal Engineering, 135, 257–268, 2018.
[4] H. Liu, H. Li, Y. He, and Z. Chen, "Heat transfer and flow characteristics in a circular tube fitted with rectangular winglet vortex generators," International Journal of Heat and Mass Transfer, 126, 989–1006, 2018.
[5] Y. Lei, F. Zheng, C. Song, and Y. Lyu, "Improving the thermal hydraulic performance of a circular tube by using punched delta-winglet vortex generators," International Journal of Heat and Mass Transfer, 111, 299–311, 2017.
[6] C. Zhai, M. D. Islam, R. Simmons, and I. Barsoum, "Heat transfer augmentation in a circular tube with delta winglet vortex generator pairs," International Journal of Thermal Sciences, 140, 480–490, 2019.
[7] Z. Sun, K. Zhang, W. Li, Q. Chen, and N. Zheng, "Investigations of the turbulent thermal-hydraulic performance in circular heat exchanger tubes with multiple rectangular winglet vortex generators," Applied Thermal Engineering, 168, 114838, 2020.
[8] S. Pourhedayat, S. M. Pesteei, H. E. Ghalinghie, M. Hashemian, and M. A. Ashraf, "Thermal-exergetic behavior of triangular vortex generators through the cylindrical tubes," International Journal of Heat and Mass Transfer, 151, 119406, 2020.
[9] J. F. Zhang, L. Jia, W. W. Yang, J. Taler, and P. Oclon, "Numerical analysis and parametric optimization on flow and heat transfer of a microchannel with longitudinal vortex generators," International Journal of Thermal Sciences, 141, 211–221, 2019.
[10] Y. Zhang, C. Kang, H. Zhao, and S. Teng, "Effects of in-line configuration of drag-type hydrokinetic rotors on inter-rotor flow pattern and rotor performance," Energy Conversion and Management, 196, 44–55, 2019.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 2, Pages 203–214, p-ISSN 2655-8564, e-ISSN 2685-9432

The Effect of Water Immersion on the Refrigerant Pipeline Between Compressor and Condenser on COP and Efficiency of a Cooling Machine

Wibowo Kusbandono 1,*

1 Department of Mechanical Engineering, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
* Corresponding author: bowo@usd.ac.id

(Received 04-11-2021; revised 10-11-2021; accepted 10-11-2021)

Abstract

The purpose of this research is (a) to design and assemble a vapor compression cycle cooling machine using main components available on the market, and (b) to obtain the characteristics of the cooling machine, which include the coefficient of performance (COP) and the efficiency of the cooling machine. The research was conducted experimentally in the laboratory. The refrigeration machine works on a vapor compression cycle, with the main components being a compressor, an evaporator, a capillary tube, and a condenser. The compressor power is 1/6 hp, while the other main components are sized to match the compressor power. The refrigerant used is R134a. The research variations concerned the condition of the refrigerant pipe located between the compressor and condenser: (a) not submerged in water, (b) submerged in 0.50 liter of water, and (c) submerged in 0.75 liter of water. The results of the study show that water immersion of the refrigerant pipe located between the compressor and condenser affects the COP value and the efficiency of the refrigeration machine.
In sequence: (1) without immersion in water, the COP value is 2.34 and the efficiency is 0.60; (2) submerged in 0.50 liter of water, the COP value is 2.41 and the efficiency is 0.62; (3) submerged in 0.75 liter of water, the COP value is 2.45 and the efficiency is 0.64.

Keywords: COP, efficiency, cooling machine, vapor compression cycle, water immersion

1 Introduction

A two-door refrigerator is one example of a cooling machine commonly used in households. It can cool a variety of foodstuffs and beverages, and can be used to freeze food and water. The working temperature of a two-door refrigerator is generally below the freezing point of water, with the evaporator working at even lower temperatures. A two-door refrigerator has two compartments with different functions: one compartment has a very low temperature and serves to freeze foodstuffs (the freezer compartment), while the other serves to cool or lower the temperature of foodstuffs and beverages without freezing them. In a two-door refrigerator, the freezing and cooling of foodstuffs are carried out by cold air circulated by a fan past every foodstuff in the refrigerator. The circulated air is at a very low temperature, obtained when the air is passed through the evaporator; this air temperature is close to the working temperature of the evaporator. After passing through the evaporator, the air flows into the freezer compartment and only then into the cooling compartment. From the cooling compartment the air is recirculated through the evaporator and sent to the freezer again, so the air circulation takes place continuously. When the air passes through the evaporator, the water content of the air freezes and becomes frost that sticks to the evaporator.
The air can contain moisture because the refrigerator door is often opened. As time goes by, the frost on the evaporator accumulates and thickens, interfering with the performance of the two-door refrigerator's cooling machine. Therefore, the frost must be removed from the evaporator promptly so that the performance of the refrigeration machine remains high. In general, frost is removed by heating: after a certain period of time (about 7 hours), an electric heater attached to the evaporator switches on and melts the frost on the evaporator pipes into water. The resulting water drips down into a water reservoir located at the bottom of the cooling machine. Over time, more and more water collects and fills the reservoir. This collected water serves no purpose and must be discarded, otherwise it will spill. In some of the newer two-door refrigerators, the water stored in the reservoir is removed in a way that does not bother the user: the water in the reservoir is evaporated with heat energy. One way to do this is to immerse the high-temperature refrigerant pipeline, which is part of the cooling machine, in the water reservoir; the immersed part is the refrigerant pipe located between the compressor and the condenser. With this method, the stored water can evaporate completely, so refrigerator users are not bothered with disposing of it. Another way is to heat the water reservoir by contact with the surface of the compressor casing; the heat received by the reservoir is then used to evaporate the water.
This way of evaporating the water in the reservoir made the author interested in finding out the effect of the volume of the immersion water on the characteristics of the refrigeration machine when the refrigerant pipeline located between the compressor and the condenser is submerged in water. What is the effect of the immersion water on the COP and efficiency of the cooling machine? This research refers to references [1], [2], [3], [4], and [5].

2 Research Methodology

Researched Object

The object under study is a refrigeration machine that works on a vapor compression cycle, with a hermetic compressor power of 1/6 hp; the other main components, such as the finned tube evaporator, the pipe condenser with reinforcing wires, and the capillary tube, are sized to match the compressor power. All components are standard components obtained on the market. A schematic drawing of the tested cooling machine is presented in Figure 1.

Figure 1. Schematic of the cooling machine under study.

Research Flow

The research was conducted experimentally. The research flow is presented in Figure 2.

Figure 2. Research flow chart

Research Variation

The research was conducted by varying the volume of water used to immerse the refrigerant pipe located between the compressor and condenser: (a) water volume of 0 ml, i.e. without immersion, (b) water volume of 0.5 liter, and (c) water volume of 0.75 liter. The variations were made to observe the COP and efficiency of the cooling machine.
3 Results and Discussion

The research results and the processed data required to determine the characteristics of the cooling machine are presented in Tables 1 to 3. The data presented were recorded at minute 180 after the cooling machine was turned on, on the grounds that by that time the operation of the machine can be considered steady. In the discussion of the results, to simplify data processing, subcooling and superheating in the compression cycle are ignored. This is due to the difficulty of obtaining refrigerant properties such as entropy and enthalpy under superheated conditions. It would in principle be possible to obtain data for the superheated (hot gas) and subcooled conditions from the p-h diagram of R134a, but the researchers had difficulty getting accurate and reliable data that could be accounted for, because it would only be based on estimates.

From the research that has been done, the cooling machine that was designed and assembled works well as desired: it operates continuously without stalling or overheating. From the research data, the machine provides a very low evaporator working temperature, reaching -23 °C and below, and the condenser working temperature stays above the surrounding ambient temperature, at 37 °C and above (Table 1).

Table 1.
Working pressure and temperature of the evaporator and condenser

| Condition of the pipe between compressor and condenser | Evaporator pressure (psia) | (kPa) | Condenser pressure (psia) | (kPa) | Evaporator temperature (°C) | (K) | Condenser temperature (°C) | (K) |
| Not submerged in water | 16.83 | 116 | 155.52 | 1072 | -23 | 250 | 42 | 315 |
| Submerged in ½ liter of water | 14.65 | 101 | 143.57 | 989.6 | -26 | 247 | 39 | 312 |
| Submerged in ¾ liter of water | 13.63 | 94 | 135.97 | 937.2 | -27.5 | 245.5 | 37.2 | 310.2 |

Table 2. Enthalpy values in the vapor compression cycle for each study variation (h1 to h4 denote the cycle state points: compressor inlet, compressor outlet, condenser outlet, and evaporator inlet)

| Condition of the pipe between compressor and condenser | P_evap (kPa) | P_cond (kPa) | h1 (kJ/kg) | h2 (kJ/kg) | h3 (kJ/kg) | h4 (kJ/kg) |
| Not submerged in water | 116 | 1072 | 384 | 437.3 | 259.4 | 259.4 |
| Submerged in ½ liter of water | 101 | 989.6 | 382.82 | 436.0 | 254.9 | 254.9 |
| Submerged in ¾ liter of water | 94 | 937.2 | 381.90 | 435.1 | 251.9 | 251.9 |

Table 3. Calculation results of cooling engine characteristics

| Condition of the pipe between compressor and condenser | q_in (kJ/kg) | q_out (kJ/kg) | w_in (kJ/kg) | COP |
| Not submerged in water | 124.60 | 177.90 | 53.30 | 2.34 |
| Submerged in ½ liter of water | 127.92 | 181.10 | 53.18 | 2.41 |
| Submerged in ¾ liter of water | 130.00 | 183.10 | 53.10 | 2.45 |

| Condition of the pipe between compressor and condenser | Ideal COP | Efficiency | Refrigerant mass flow rate (kg/s) |
| Not submerged in water | 3.91 | 0.60 | 0.002807 |
| Submerged in ½ liter of water | 3.86 | 0.62 | 0.002813 |
| Submerged in ¾ liter of water | 3.85 | 0.64 | 0.002817 |

The research data in Table 1 show that when the pipe located between the compressor and the condenser is immersed in water, the data differ from the case where the pipe is not immersed. This means that the water immersion of the pipe affects the working conditions of the cooling machine, or in other words, its characteristics.
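The characteristics in Table 3 follow from the Table 2 enthalpies through the standard vapor-compression relations q_in = h1 - h4, q_out = h2 - h3, w_in = h2 - h1, COP = q_in / w_in, and efficiency = COP / COP_ideal. The quick check below (an editor's sketch; the h1 to h4 labels are assumed to be the usual cycle state points) reproduces the tabulated values to within the paper's rounding:

```python
# Reproducing Table 3 from the Table 2 enthalpies with the standard
# vapor-compression relations. The h1..h4 state labels are assumed to be
# the usual cycle states (compressor inlet, compressor outlet, condenser
# outlet, evaporator inlet); numbers are copied from Tables 2 and 3.
cases = {
    "not submerged": dict(h1=384.00, h2=437.3, h3=259.4, h4=259.4, cop_ideal=3.91),
    "1/2 liter":     dict(h1=382.82, h2=436.0, h3=254.9, h4=254.9, cop_ideal=3.86),
    "3/4 liter":     dict(h1=381.90, h2=435.1, h3=251.9, h4=251.9, cop_ideal=3.85),
}
for name, c in cases.items():
    q_in = c["h1"] - c["h4"]      # heat absorbed by the evaporator [kJ/kg]
    q_out = c["h2"] - c["h3"]     # heat rejected by the condenser [kJ/kg]
    w_in = c["h2"] - c["h1"]      # specific compressor work [kJ/kg]
    cop = q_in / w_in             # actual COP
    eff = cop / c["cop_ideal"]    # efficiency relative to the ideal COP
    print(f"{name}: q_in={q_in:.2f}  q_out={q_out:.2f}  w_in={w_in:.2f}  "
          f"COP={cop:.2f}  eff={eff:.2f}")
```

This matches the tabulated COP values of 2.34, 2.41, and 2.45 and efficiencies of 0.60, 0.62, and 0.64 to within the rounding of the printed enthalpies.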
It also appears that the volume of water used to submerge the pipe affects the research data, which means that the water volume affects the characteristics of the cooling machine. When the pipe is not immersed in water, the condenser working temperature is the highest of all conditions (Tables 1 and 2). When the pipe is immersed in water, the condenser working temperature decreases, and when the volume of the immersion water is increased, it decreases further. This means that the condenser working temperature is influenced by the type and amount of fluid around the condenser. When the fluid is replaced, the condenser working temperature changes and adapts to the new environment until a new working temperature is reached. A liquid environment (in this case water) yields a lower condenser working temperature than a gas (air) environment. This may be because: (a) the specific heat of water is greater than that of air; the greater the specific heat of the fluid around the condenser, the harder it is to change that fluid's temperature, and this property appears to pull the condenser working temperature toward the fluid temperature; and (b) the convection heat transfer coefficient of the heat exchange between the condenser and water is greater than that between the condenser and air, and the greater the convection heat transfer coefficient, the lower the condenser working temperature. The results show that the heat released by the cooling engine tends to be larger when the condenser working temperature is lower. This is probably because the convection heat transfer coefficient for water is greater than that for air.
The greater the convection heat transfer coefficient, the greater the heat released by the cooling engine to the water. The pipe being immersed in water therefore increases the cooling machine's ability to release heat; the largest amount of heat is released when the pipe is submerged in ¾ liter of water (Table 3).

The results also show that when the pipe is immersed in water, the evaporator working temperature changes, moving toward lower temperatures (Table 1). As is well known, the condenser working temperature is directly proportional to the condenser working pressure: when the condenser working temperature decreases, its working pressure also decreases. For the same refrigeration machine, the compressor's ability to drive the refrigerant is certainly not much different, both when forcing refrigerant from the evaporator to the condenser and when the refrigerant passes from the condenser to the evaporator through the capillary tube, so the pressure difference between the condenser and the evaporator tends to be constant. It is therefore understandable that a drop in the condenser working pressure causes the evaporator working pressure to drop as well, and the decreasing evaporator working pressure causes the evaporator working temperature to decrease too. The results show that when the pipe leading to the condenser is immersed in water, the heat absorbed by the evaporator increases (Table 3). This is understandable because the evaporator working temperature is lower.
the lower working temperature of the evaporator makes the ability of the evaporator to absorb heat greater, because the temperature difference between the working temperature of the evaporator and the temperature of the surrounding fluid becomes bigger. the largest heat is absorbed by the evaporator when the pipe leading to the condenser is immersed in ¾ liter of water. the decrease in the working pressure and temperature of the evaporator in fact has an impact on the work of the compressor in moving refrigerant (table 3). the results showed that the work done by the compressor to move refrigerant per unit mass tends to decrease. this indicates that the compressor work tends to "feel" lighter when the working pressure of the evaporator decreases; the compressor work is lightest when the pipe located between the compressor and the condenser is immersed in ¾ liter of water. thus, the important information obtained from this research is that when the pipe between the compressor and the condenser is immersed in water, the working pressure and working temperature of the condenser decrease. the decreasing pressure and temperature of the condenser also decrease the working pressure and working temperature of the evaporator, and the lower working pressure in the evaporator causes the compressor work to tend to be lighter. when the pipe leading to the condenser is immersed in water, the working conditions change to a new equilibrium, which is the simultaneous result of the work of each component of the refrigeration machine operating on the vapor compression cycle. the actual cop value is calculated as the ratio of the heat absorbed by the evaporator (q_in) to the compressor work used by the cooling machine (w_in).
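the actual cop, the ratio of the heat absorbed by the evaporator to the compressor work per unit mass of refrigerant, can be sketched numerically. this is a minimal illustration, not the paper's calculation: h1, h2, and h4 are hypothetical refrigerant enthalpies at the evaporator outlet, compressor outlet, and evaporator inlet.

```python
# minimal sketch of the actual cop of a vapor-compression cycle:
# cop = q_in / w_in, with q_in the heat absorbed by the evaporator and
# w_in the compressor work, both per unit mass of refrigerant.
# the enthalpy values below are hypothetical, for illustration only.

def cop_actual(h1, h2, h4):
    """h1: evaporator outlet, h2: compressor outlet, h4: evaporator inlet (kj/kg)."""
    q_in = h1 - h4   # heat absorbed by the evaporator, kj/kg
    w_in = h2 - h1   # compressor work, kj/kg
    return q_in / w_in

cop = cop_actual(h1=385.0, h2=425.0, h4=287.0)  # (385-287)/(425-385) = 2.45
```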
if the heat absorbed by the evaporator (q_in) tends to increase while the compressor work (w_in) tends to decrease, the resulting cop value tends to increase. table 3 shows the cop values for each variation. the highest actual cop, 2.45, is obtained when the pipe is immersed in ¾ liter of water, followed by the condition with the smaller volume of water and then the condition without immersion. the order of these cop values is the opposite of the order of the compressor-work values: where the compressor work increases, the cop tends to decrease. the efficiency of the cooling machine is based on the comparison between the actual cop and the ideal cop. the results of this study show that the actual cop tends to increase while the ideal cop tends to decrease; because efficiency is the ratio between the two, the efficiency value tends to increase. the highest efficiency of the cooling machine is 0.64, produced when the pipe is immersed in ¾ liter of water; the lowest efficiency occurs when the pipe is not immersed in water.

4 conclusions

the results of the study are as follows: a. the designed cooling machine works well and smoothly as desired. the working temperature of the evaporator is able to reach low temperatures, and the working temperature of the condenser is able to reach temperatures above the temperature of the fluid in its environment. b. the actual cop of the refrigeration machine, from the largest to the lowest, is obtained when the pipe located between the compressor and the condenser is immersed in the larger volume of water (¾ liter), immersed in the smaller volume of water, and without water immersion, namely 2.45, 2.41, and 2.34 respectively. c.
the efficiency values of the refrigeration machine, from the largest to the lowest, are obtained when the pipe located between the compressor and the condenser is immersed in the larger volume of water (¾ liter), immersed in the smaller volume of water, and without water immersion, namely 0.64, 0.62, and 0.60 respectively.

references

[1] k. anwar, e. arif, and w. h. piarah, "effects of capillary pipe temperature on cooling engine performance." mechanical journal, 1 (1), 30–3, 2010.
[2] matheus m. dwinanto, hari rarindo, and jonri lomi ga, "the effect of the dimensions of the capillary tube and the mass of the refrigerant used on the performance of a double evaporator refrigeration machine for fish preservation," proceedings of the annual national seminar on mechanical engineering & thermofluid iv, 1 (1), 2012.
[3] said h. i. abbas, lita a. latif, "experimental study of cooling engine performance in the mechanical engineering laboratory, khairun university, ternate," proceedings of the annual national seminar on mechanical engineering & thermofluid iv, 1 (1), 2012.
[4] soegeng witjahjo, "performance test of refrigeration machines using lpg refrigerant," austenite journal, 1 (2), 2009.
[5] wibowo kusbandono, p.k. purwadi, "the effect of the existence of fan in the wood drying room on the drying time and the performance of the electric energy wood dryer," international journal of applied sciences and smart technologies (ijasst), 3 (1), 2021.
international journal of applied sciences and smart technologies, volume 3, issue 2, pages 161–170, p-issn 2655-8564, e-issn 2685-9432

study of nickel extraction process from spent catalysts with hydrochloric acid solution: effect of temperature and kinetics study

kevin cleary wanta 1,*, ivanna crecentia narulita simanungkalit 1, elsha pamida bahri 1, ratna frida susanti 1, gelar panji gemilar 2, widi astuti 3, himawan tri bayu murti petrus 4

1 department of chemical engineering, parahyangan catholic university, jl. ciumbuleuit 94, bandung 40141, indonesia
2 pt petrokimia gresik, jl. jenderal ahmad yani, gresik 61119, indonesia
3 research unit for mineral technology, indonesian institute of sciences (lipi), jl. ir. sutami km. 15, tanjung bintang 35361, indonesia
4 department of chemical engineering, universitas gadjah mada, jl. grafika 2, kampus ugm, yogyakarta 55281, indonesia
* corresponding author: kcwanta@unpar.ac.id

(received 20-08-2021; revised 19-09-2021; accepted 20-09-2021)

abstract

as one of the hazardous and toxic solid wastes, spent catalysts need to be treated before the waste is discharged into the environment. among the substances that need to be removed from spent catalysts are the heavy metal ions and/or compounds contained in them. one method that can be applied is extraction using an acid solvent. in this study, the extraction process was carried out on spent catalyst samples from pt. petrokimia gresik. the focus of the study is nickel extraction, varying the temperature in the range of 30–85 °c. a 1 m hydrochloric acid (hcl) solution was used as the solvent, and the extraction process lasted 120 minutes.
the experimental results show that a maximum nickel recovery of 14.70% can be achieved at a temperature of 85 °c. kinetic studies were carried out using two kinetic models. the evaluation of both models against the research data shows that the lump model gives better results than the shrinking core model: the average error percentage of the lump model is smaller. this indicates that the extraction process was controlled by the diffusion step through the ash layer in the solid and the chemical reaction simultaneously.

keywords: extraction, lump model, nickel, shrinking core model

1 introduction

various chemical industries, such as the oil, fertilizer, and petrochemical industries, require catalysts to increase the rate of chemical reactions. the catalysts used can be solid catalysts containing different metal contents, such as nickel (ni), iron (fe), cobalt (co), vanadium (v), molybdenum (mo), and various other heavy metals [1]. long-term use will make a catalyst saturated and no longer adequate for use; such catalysts are then replaced and disposed of as used catalysts, usually called spent catalysts. spent catalysts cannot be disposed of directly into the environment because they are classified as hazardous solid waste. therefore, this waste needs to be treated first, with the aim of removing the hazardous compounds, such as heavy metals, contained in it. one treatment for spent catalyst solid waste is extracting the ions or compounds contained in the catalysts, a method usually referred to as leaching. this method is commonly used and has been applied by several researchers before. the extraction process requires a solvent to react with and dissolve the metal ions and/or compounds.
the solvents usually used are acidic solvents, both strong acids and weak acids [1], [2], [3], [4]. in this study, the extraction process was carried out using a hydrochloric acid solution. furthermore, this research also studies parameters that have a significant impact on the extraction process, such as temperature. temperature is an important parameter in the extraction process because it affects the rate of molecular diffusion and the rate of chemical reactions. by studying the effect of temperature on the extraction process, the kinetics of the process can be investigated using existing kinetic models, such as the shrinking core model and the lump model. by studying the kinetics of the extraction process, a proper extractor can be designed; a proper extractor design must follow the mechanism that applies to the extraction process. thus, the results of this study are expected to provide appropriate information for the extractor design process.

2 research methodology

materials. the main raw material of this research is the spent catalyst from pt. petrokimia gresik. these catalysts have a nickel content of 16.7 wt%, where the largest nickel phases in these catalysts are nickel metal and nickel oxide (nio). in addition, the other main material used is a 1 m hydrochloric acid (hcl) solution, which acts as the solvent in the extraction process.

equipment. the main equipment used for the extraction process is a series of apparatus consisting of a three-neck flask (as the extractor), a stirrer and motor, a condenser, a water bath and thermostat (to maintain a constant operating temperature), and a thermometer. the instrument used for sample analysis is atomic absorption spectroscopy (aas).

research procedure.
180 ml of 1 m hydrochloric acid solution was put into a three-neck flask. after the equipment was assembled, the solution was heated to the desired temperature. in this study, the operating temperature was varied at 30, 60, and 85 °c. after the temperature was reached, 36 grams of the spent catalyst solids (<74 microns) were put into the extractor; the moment of solids addition was counted as t = 0. the extraction process lasted for 120 minutes, and during the operation, samples were taken periodically at 5, 10, 15, 30, 60, and 120 minutes. the samples taken were first separated into solid and liquid phases using a centrifuge operated at 1,000 rpm for 10 minutes. the supernatant formed was then analyzed for nickel content using the atomic absorption spectroscopy (aas) instrument.

data analysis. the data obtained from the aas analysis were processed to obtain the nickel recovery percentage, where the equation used to calculate the value is:

x = (c_ni / c_ni,tot) × 100% (1)

where x is the percentage of nickel recovery, c_ni is the concentration of nickel extracted during the extraction process in ppm, and c_ni,tot is the total nickel concentration extractable from the raw material in ppm. furthermore, the nickel recovery data were used to study the kinetics of the extraction process. two kinetic models are applied in this study, namely the shrinking core model and the lump model. for the shrinking core model, the equations used are [5], [6], [7]:

diffusion in the liquid film layer controlling: x′ = k_f t (2)

diffusion in the ash layer controlling: 1 − 3(1 − x′)^(2/3) + 2(1 − x′) = k_d t (3)

chemical reaction controlling: 1 − (1 − x′)^(1/3) = k_r t (4)

where x′ is the nickel recovery fraction, k_f, k_d, and k_r are the rate constants for the extraction process, and t is the operating time.
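the three controlling-step expressions translate directly into code. a minimal sketch (the function names are mine, not from the paper); each g(x) should be linear in time t when the corresponding step controls the process:

```python
# shrinking core model conversion functions in levenspiel's standard
# forms; x is the nickel recovery fraction (0 <= x <= 1).

def g_film(x):
    """diffusion through the liquid film controls: g = x = k_f * t."""
    return x

def g_ash(x):
    """diffusion through the ash layer controls: g = k_d * t."""
    return 1 - 3 * (1 - x) ** (2 / 3) + 2 * (1 - x)

def g_reaction(x):
    """chemical reaction at the core surface controls: g = k_r * t."""
    return 1 - (1 - x) ** (1 / 3)

# e.g. the recovery fraction 0.147 (14.70 % at 85 c, 120 min):
x = 0.147
values = (g_film(x), g_ash(x), g_reaction(x))
```

plotting each g(x) against t and checking which transform is most linear (highest r²) is the kind of evaluation the paper performs for figure 2.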
for the lump model, the equation used is [8]:

t = a [1 − (1 − x′)^(1/3)] + b [1 − 3(1 − x′)^(2/3) + 2(1 − x′)] (5)

where a is a constant related to the rate of the chemical reaction step and b is a constant related to the rate of the diffusion step in the ash layer. determination of the suitable mathematical model is done by calculating the percentage error between the research data and the simulation data, using the following equation:

%e = |(x_data − x_model) / x_data| × 100% (6)

where %e is the percentage error, x_data is the percentage of nickel recovery from the experimental data, and x_model is the percentage of nickel recovery from the simulation results of the mathematical models.

3 results and discussion

effect of temperature on nickel recovery. temperature is a very important parameter and has a significant influence on the rate of the extraction process. in this study, the temperature used is in the range of 30–85 °c. the experimental results are presented in figure 1.

figure 1. effect of temperature on nickel recovery

figure 1 shows that the higher the temperature used, the more nickel can be obtained. in this study, the highest nickel recovery was obtained at a temperature of 85 °c for 120 minutes, where the percentage of nickel recovery was 14.70%. in general, the nickel recovery during the process reaches 1.44 times (at 60 °c) and 2.25 times (at 85 °c) the nickel recovery at 30 °c. this phenomenon occurs because an increase in temperature enhances the kinetic energy of each molecule in the system; consequently, molecules collide more often, so chemical reactions take place more quickly.
in addition, an increase in temperature also enhances the rate of diffusion, both molecular diffusion in the liquid film layer and diffusion in the ash layer in the solid.

kinetics study using the shrinking core model. the first kinetic model evaluated against the experimental data is the shrinking core model. this is the model most widely used by previous researchers in hydrometallurgy and metal extraction processes. the evaluation of this model is carried out using equations (2)–(4), and the evaluation results are presented in figure 2.

figure 2. simulation results of the shrinking core model when (a) diffusion in the liquid film layer controls; (b) diffusion in the ash layer controls; (c) the chemical reaction controls

figure 2 shows the simulation results for the experimental data. the simulation results show that the shrinking core model in which the diffusion step in the ash layer controls is the best model to describe the overall mechanism of the extraction process. this can be concluded because the r² value obtained for this model is better than for the other two models (the diffusion step in the film layer and the chemical reaction controlling the process); a good r² value is one that is close to 1. the evaluation results indicate that the diffusion step through the ash layer in the solid is the step with the slowest rate. in the solid, there are pathways through which each molecule (both reactant and product molecules) diffuses. these pathways are small, so the reactant molecules diffusing from the liquid surface to the unreacted core interfere with the product molecules diffusing from the reacted surface in the solid to the bulk liquid.
this causes diffusion through the ash layer in the solid to be the step that controls the extraction process. thus, the total rate of nickel extraction from the spent catalyst is determined by the rate of diffusion in the solid (the slowest rate). another parameter commonly used to evaluate the kinetics of a process is the activation energy, the minimum energy required for a reaction to occur. the value of this parameter can be found using the arrhenius equation as follows [9]:

k = a exp(−e_a / rt) (7)

ln k = ln a − e_a / rt (8)

where a is the collision frequency, e_a is the activation energy, r is the gas constant, and t is the absolute temperature. in this extraction process, the value of the collision frequency (a) obtained is 0.1384, while the activation energy is 22.65 kj/mol. according to havlík, if the activation energy is in the range of 20–35 kj/mol, the extraction process is controlled by diffusion and chemical reaction simultaneously [10]. therefore, this kinetic study is continued using the lump model, which combines the two steps in one mathematical model.

kinetics study using the lump model. one of the weaknesses of the shrinking core model is that the mathematical equations compiled in the model assume only one step controls the extraction process; the other steps are ignored because they are considered to have very fast rates. in fact, in a solid–liquid extraction process, five steps are involved in the system. in some cases, the assumption mentioned above does not represent the actual mechanism that occurs during the extraction process; as a result, the designed extractor will not be suitable for the process.
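the activation-energy analysis amounts to a linear fit of ln k against 1/t, whose slope is −e_a/r. the sketch below recovers e_a from rate constants; since the paper does not tabulate its k values, the constants here are synthesized from the reported a = 0.1384 and e_a = 22.65 kj/mol:

```python
import math

# arrhenius analysis: k = a * exp(-ea / (r * t)), so a linear fit of
# ln k versus 1/t has slope -ea/r.

R = 8.314  # gas constant, j/(mol k)

def activation_energy(temps_K, ks):
    """least-squares slope of ln k vs 1/t, returned as ea in j/mol."""
    x = [1 / T for T in temps_K]
    y = [math.log(k) for k in ks]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    return -slope * R

# synthetic rate constants at 30, 60, 85 c, built from the fitted values:
temps = [303.15, 333.15, 358.15]
ks = [0.1384 * math.exp(-22650 / (R * T)) for T in temps]
ea = activation_energy(temps, ks)  # recovers about 22650 j/mol
```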
therefore, in this study, to complete the kinetic study, the lump kinetic model is evaluated against the experimental data, where the equation used for the simulation process follows equation (5). the simulation results of the lump model are then compared with the evaluation results of the shrinking core model in which the diffusion step in the ash layer controls the extraction process. the comparison of the two kinetic models is presented in table 1.

table 1. comparison of the simulation results of the shrinking core model (diffusion step in the ash layer) with the lump model

time    recovery – exp. (%)     recovery – scm* (%)     recovery – lump (%)     error – scm* (%)        error – lump (%)
(min)   30°c   60°c   85°c     30°c   60°c   85°c      30°c   60°c   85°c     30°c   60°c   85°c      30°c   60°c   85°c
0       0.00   0.00   0.00     0.00   0.00   0.00      0.00   0.00   0.00     0.00   0.00   0.00      0.00   0.00   0.00
5       1.92   3.02   3.66     0.74   1.17   2.35      1.41   1.89   3.04     61.35  61.28  35.67     26.56  37.42  16.94
10      2.41   3.16   4.32     1.29   1.96   3.49      1.99   2.67   4.28     46.36  37.90  19.23     17.43  15.51  0.93
15      3.13   3.68   5.46     1.75   2.60   4.51      2.43   3.27   5.23     44.07  29.27  17.44     22.36  11.14  4.21
30      3.61   4.13   6.88     2.84   4.09   6.83      3.43   4.60   7.34     21.32  0.91   0.78      4.99   11.38  6.69
60      4.39   5.65   9.73     4.44   6.24   10.10     4.83   6.46   10.27    1.08   10.36  3.80      10.02  14.34  5.55
120     6.51   9.34   14.70    6.73   9.27   14.67     6.78   9.05   14.30    3.33   0.76   0.23      4.15   3.10   2.72

the average error percentage per temperature:                                25.36  20.07  11.02     12.22  13.27  5.29
the average error percentage per kinetics model:                             18.82                   10.26

* scm: the shrinking core model in which the diffusion step in the ash layer controls the process.

based on table 1, the lump kinetics model provides better evaluation results than the shrinking core model. this can be concluded from the average error percentage of the two models, namely 10.26% for the lump model and 18.82% for the shrinking core model.
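the error criterion of equation (6), which produces the last columns of table 1, can be sketched as follows. note that the t = 0 row must be excluded from averaging, since the equation divides by the experimental recovery:

```python
# percentage error between experimental and simulated recovery,
# equation (6): %e = |(x_data - x_model) / x_data| * 100.

def percent_error(x_data, x_model):
    return abs((x_data - x_model) / x_data) * 100

def mean_percent_error(xs_data, xs_model):
    errs = [percent_error(d, m) for d, m in zip(xs_data, xs_model)]
    return sum(errs) / len(errs)

# one point from table 1 (85 c, t = 120 min): experimental 14.70 %,
# lump model 14.30 %:
e = percent_error(14.70, 14.30)  # about 2.72 %, as in table 1
```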
these results further corroborate the earlier finding that the spent catalyst extraction process using 1 m hydrochloric acid solution is controlled by the diffusion step through the ash layer and the chemical reaction simultaneously.

4 conclusion

based on the experimental and simulation results, temperature significantly affects the nickel extraction process from spent catalysts with hydrochloric acid as the solvent. at a temperature of 85 °c, the nickel recovery can reach 14.70% after the process lasts for 120 minutes. from the research data, the mechanism of the extraction process was studied, and it was found that the rate of the extraction process is determined by the diffusion step through the ash layer in the solid and the chemical reaction step; both steps occur simultaneously. the use of the lump model supports this conclusion: based on the average error percentage, the lump model gives a smaller error value than the shrinking core model, with an average error percentage of 10.26%.

acknowledgements

the authors thank the institute for research and community service of parahyangan catholic university, which supported this research financially. in addition, the authors also thank pt. petrokimia gresik, which provided the main raw materials for this research.

references

[1] m. marafi and a. stanislaus, "waste catalyst utilization: extraction of valuable metals from spent hydroprocessing catalysts by ultrasonic-assisted leaching with acids." industrial & engineering chemistry research, 50, 9495–9501, 2011.
[2] p.k. parhi, k.h. park, g. senanayake, "a kinetic study on hydrochloric acid leaching of nickel from ni–al2o3 spent catalyst." journal of industrial and engineering chemistry, 19, 589–594, 2013.
[3] j. ramos-cano, g. gonzález-zamarripa, f.e. carrillo-pedroza, m. de j. soria-aguilar, a.
hurtado-marcías, a. cano-vielma, "kinetics and statistical analysis of nickel leaching from spent catalyst in nitric acid solution." international journal of mineral processing, 148, 41–47, 2016.
[4] w. mulak, b. miazga, a. szymczycha, "kinetics of nickel leaching from spent catalyst in sulphuric acid solution." international journal of mineral processing, 77, 231–235, 2005.
[5] o. levenspiel, chemical reaction engineering, 3rd edition, new york: john wiley & sons, inc., 1999.
[6] o.s. ayanda, f.a. adekola, a.a. baba, o.s. fatoki, b.j. ximba, "comparative study of the kinetics of dissolution of laterite in some acidic media." journal of minerals & materials characterization & engineering, 10 (15), 1457–1472, 2011.
[7] k.c. wanta, f.h. tanujaya, r.f. susanti, h.t.b.m. petrus, i. perdana, w. astuti, "studi kinetika proses atmospheric pressure acid leaching bijih laterit limonit menggunakan larutan asam nitrat konsentrasi rendah." jurnal rekayasa proses, 12 (2), 77–84, 2018.
[8] k.c. wanta, w. astuti, i. perdana, h.t.b.m. petrus, "kinetic study in atmospheric pressure organic acid leaching: shrinking core model versus lump model." minerals, 10, 1–10, 2020.
[9] h.s. fogler, elements of chemical reaction engineering, 4th edition, new jersey: prentice hall, 2006.
[10] t. havlík, hydrometallurgy, cambridge: woodhead publishing limited, 2006.

international journal of applied sciences and smart technologies, volume 4, issue 1, pages 35-46, p-issn 2655-8564, e-issn 2685-9432

determining the coefficient of restitution through the "bouncing ball" experiment using phyphox

jesi pebralia 1,*

1 department of physics, universitas jambi, jambi, indonesia
* corresponding author: jesipebralia@unja.ac.id

(received 13-04-2022; revised 26-04-2022; accepted 26-04-2022)

abstract

this study aims to determine the restitution coefficient based on the reflected sound from the "bouncing ball" experiment. the experiment used a phyphox-based smartphone.
the produced sound came from the impacts between a marble and the floor. theoretically, the value of the coefficient of restitution is obtained from the square root of the final height of the object's bounce divided by its initial height. in this study, the height of the bounce of the "bouncing ball" was measured using the phyphox application, which analyzes the sound of the bouncing ball and the time intervals between impacts. the results show that the values of the coefficient of restitution for the three marbles were 0.93, 0.92, and 0.92, while the average errors were 0.65%, 0.85%, and 1.43%, respectively. furthermore, the average error of the overall measurement is 0.97%. this error depends strongly on the shape of the object: the rounder the object, the higher the accuracy. in this study, the coefficient of restitution was determined in two ways: by comparing the heights of the ball's bounces, and by comparing the time intervals of the n-th and (n+1)-th bounces. the values of the coefficient of restitution produced by the two methods were identical. thus, this study confirms that the bouncing ball experiment using phyphox yields valid data, so it can be implemented for determining the coefficient of restitution.

keywords: bouncing ball, coefficient of restitution, phyphox, smartphone

(this work is licensed under a creative commons attribution 4.0 international license.)

1 introduction

the coefficient of restitution is a value that states the level of elasticity of objects in a collision. in particular, the coefficient of restitution characterizes the degrees of freedom in an inelastic collision and is dimensionless [1].
the value of the coefficient of restitution depends on the ratio of the final height and the initial height of the colliding particles, which is mathematically expressed by equation (1):

𝑒 = √(ℎ₂/ℎ₁) (1)

determining the value of the coefficient of restitution is very useful for developing various sub-fields of physics. the coefficient of restitution has become an essential part of granular hydrodynamics and the kinetic theory of gases [2], [3], computation of granular matter [4], and even agriculture, especially the development of agricultural techniques [5]. the coefficient of restitution provides information on the energy lost during the collision process [6], which is necessary for dry granular modelling and multi-phase flow models. research on determining the value of the coefficient of restitution has been carried out with different techniques: using a robot and a piezoelectric sensor [7], using a high-speed camera [6], [8], using the double pendulum method [9], using high-speed video [10], and others [11]–[14]. one experiment that can be used to determine the value of the coefficient of restitution is the "bouncing ball" experiment [15]. a bouncing ball is the bounce of a ball dropped without initial velocity from a certain height above the earth's surface onto a particular surface. in the bouncing ball phenomenon, an inelastic collision occurs where the ball bounces up repeatedly until it stops at a specific time. the process of the bouncing ball illustrates many aspects of the principles of mechanics, including the phenomenon of collisions when the ball hits the floor surface [16].
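equation (1) is simple enough to express directly. a minimal sketch, with a hypothetical rebound height for illustration:

```python
import math

# coefficient of restitution from drop height h1 and rebound height h2,
# equation (1): e = sqrt(h2 / h1).

def restitution_from_heights(h1, h2):
    return math.sqrt(h2 / h1)

# e.g. a marble dropped from 15 cm rebounding to 12.7 cm (hypothetical):
e = restitution_from_heights(0.15, 0.127)  # about 0.92
```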
according to the studies that have been carried out, the existing techniques for determining the value of the coefficient of restitution are expensive, complex, and challenging for students to carry out independently. in this study, a cheap and practical technique for determining the coefficient of restitution is introduced using a smartphone. with the advanced technology of this era, smartphones are generally equipped with sophisticated sensors that can support the implementation of science practicums, especially in physics. in addition, this is supported by the existence of practical support applications that can be downloaded and run freely on smartphones. one application that can be used for physics experiments is phyphox. studies using the phyphox application include research on determining spring constants in spring oscillation [17], free-fall motion experiments using the acoustic stopwatch feature [18], pendulum motion experiments [19], and others [20], [21]. the value of the coefficient of restitution can be determined using the phyphox application by finding the ratio of the object's speed between two adjacent bounces [22]. in this study, we provide a method to determine the value of the coefficient of restitution of a bouncing ball using phyphox based on two approaches: the first through the ratio of the heights of the marble in two adjacent bounces, and the second through the ratio of the time intervals between two adjacent bounces.

2 research methodology

in this study, the value of the coefficient of restitution between a marble and the floor is calculated through the bouncing ball experiment. based on equation (1), the value of the restitution coefficient can be determined if the initial height and the final height of each bounce are known. the objects used in this study were three marbles with different diameters, in order to evaluate the effect of marble size.
the experiment design is illustrated in figure 1. the first step was setting the initial height of the marbles using a ruler; it was 15 cm from the floor. the smartphone was placed on the floor close to where the marbles bounce. to find the height of the bouncing marbles, the phyphox application was installed on the smartphone. after that, the marble was dropped without initial velocity and allowed to bounce. the sound produced by the marble's bounces is detected and recorded by the smartphone sensor, then processed and converted to generate data on the time intervals, heights, and energy of the bounces. the data are then displayed on the smartphone's lcd. the number of bounces recorded in this experiment was five.

figure 1. experiment design for determining the coefficient of restitution

in the next stage, the accuracy of the obtained data needs to be assessed, because it is related to the error of a measurement. the smaller the measurement error, the greater the accuracy of the research, so the experimental data can be stated to be valid. the measurement error is calculated with equation (2):

𝑒𝑟𝑟𝑜𝑟 = |(𝑚_𝑠𝑡𝑎𝑛𝑑𝑎𝑟𝑑 − 𝑚_𝑒𝑥𝑝𝑒𝑟𝑖𝑚𝑒𝑛𝑡) / 𝑚_𝑠𝑡𝑎𝑛𝑑𝑎𝑟𝑑| × 100% (2)

where 𝑚_𝑠𝑡𝑎𝑛𝑑𝑎𝑟𝑑 represents the actual value measured with standard measuring instruments and 𝑚_𝑒𝑥𝑝𝑒𝑟𝑖𝑚𝑒𝑛𝑡 represents the value displayed by the smartphone. the error value was obtained from the initial height measurement read by the smartphone compared to the actual height. in this study, the error should be under 2%: if the error value is less than 2%, data collection can be continued; otherwise, the experiment is repeated.

figure 2.
Figure 2. Flowchart for detecting the coefficient of restitution using Phyphox

After determining the error value, the data generated by the smartphone were analyzed to determine the coefficient of restitution and the standard deviation. The calculation of the standard deviation follows equation (3),

$$D_s = \frac{1}{n}\sqrt{\frac{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}{n-1}}, \qquad (3)$$

where $n$ represents the number of measurements and $x_i$ represents the measurement result of the $i$-th experiment.

3 Results and Discussion

Three types of marbles with different diameters were used in the experiment: marble 1 with a diameter of 14.1 mm, marble 2 with a diameter of 15.2 mm, and marble 3 with a diameter of 27.4 mm. To obtain valid data, each experiment was repeated five times. The experiment results are shown in Figure 3.

Figure 3. Experiment results for the coefficient of restitution of (a) marble 1, (b) marble 2, and (c) marble 3

Figure 3 shows the bouncing-ball results for marbles of various diameters, with the initial height set at 15 cm from the floor. In this experiment, the coefficient of restitution ranged from 0.90 to 0.95 for marble 1, from 0.90 to 0.94 for marble 2, and from 0.84 to 0.95 for marble 3. The dashed line in each figure indicates the average value of the coefficient of restitution; the averages for the three marbles are 0.9282, 0.9247, and 0.9237, respectively. Based on equation (2), the average error values for the three marbles are 0.65%, 0.85%, and 1.43%, respectively.
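Equation (3) can be sketched directly; note that, algebraically, $D_s$ equals the sample standard deviation divided by $\sqrt{n}$, i.e. the standard error of the mean. The five repeated values below are illustrative, not the paper's raw data:

```python
import math

def ds(x):
    """Eq. (3): Ds = (1/n) * sqrt((n*sum(xi^2) - (sum(xi))^2) / (n - 1)).
    Equivalent to the sample standard deviation divided by sqrt(n)."""
    n = len(x)
    s1 = sum(x)
    s2 = sum(xi * xi for xi in x)
    return math.sqrt((n * s2 - s1 * s1) / (n - 1)) / n

runs = [0.90, 0.92, 0.93, 0.94, 0.95]  # hypothetical repeated e-values
print(round(ds(runs), 4))  # 0.0086
```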
Moreover, the average coefficient of restitution for the collisions in this experiment is displayed in Table 1.

Analytical Calculation

The coefficient of restitution is a significant empirical parameter in any physical model involving energy loss caused by particle collisions [23]. One of the essential factors determining the value of the restitution coefficient is the velocity immediately after the $n$-th bounce [24],

$$v_n = v_0 e^n, \qquad (4)$$

where $v_0$ is the velocity of the ball just before the first collision and $e$ is the coefficient of restitution. The time interval between adjacent collisions (the $n$-th to the $(n+1)$-th) is expressed by equation (5),

$$T_n = \frac{2v_n}{g} = \frac{2v_0 e^n}{g} = T_0 e^n, \qquad (5)$$

where $g$ is the gravitational acceleration and $T_0 = 2v_0/g$. From equation (5), the coefficient of restitution can also be determined through the time intervals of the bouncing ball,

$$e^n = \frac{T_n}{T_0}. \qquad (6)$$

The time intervals $T_n$ of the bouncing marbles are shown in Figure 4.

Figure 4. Time interval vs. bounce number $n$ for each marble

From Figure 4, for marble 1 with $n = 1$, the height is $h = 0.1497$ m and $T_n = 0.322$ s. Substituting these data gives

$$T_0 = \frac{2v_0}{g} = \frac{2\sqrt{2gh}}{g} = \sqrt{\frac{8h}{g}} = \sqrt{\frac{8\,(0.1497\ \text{m})}{9.8\ \text{m/s}^2}} = 0.3496\ \text{s}, \qquad (7)$$

so for $n = 1$ the coefficient of restitution of marble 1 is

$$e_1 = \frac{0.322\ \text{s}}{0.3496\ \text{s}} \approx 0.92. \qquad (8)$$

The coefficients of restitution of marble 2 and marble 3 can be found by the same method: $e_2 \approx 0.93$ and $e_3 \approx 0.92$. The last step in this study was comparing the values of the coefficient of restitution.
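The interval-based route of Eqs. (5)-(8) can be sketched as a short script; the numbers reproduce the marble-1 worked example above (h = 0.1497 m, T1 = 0.322 s):

```python
import math

def restitution_from_intervals(h: float, t_n: float, n: int, g: float = 9.8) -> float:
    """Solve e^n = T_n / T0 for e, with T0 = sqrt(8*h/g) as in Eq. (7)."""
    t0 = math.sqrt(8.0 * h / g)
    return (t_n / t0) ** (1.0 / n)

e1 = restitution_from_intervals(h=0.1497, t_n=0.322, n=1)
print(round(e1, 2))  # 0.92, matching Eq. (8)
```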
The purpose was to check the validity of the data between the experimental and analytical methods. The data are shown in Table 1.

Table 1. Comparison of the coefficient of restitution between the analytical and experimental methods

Marble          Experimental   Analytical
d1 = 14.1 mm    e ≈ 0.93       e ≈ 0.92
d2 = 15.2 mm    e ≈ 0.92       e ≈ 0.93
d3 = 27.4 mm    e ≈ 0.92       e ≈ 0.92

Table 1 shows that the coefficients of restitution from the experimental and analytical approaches are not exactly the same, but are very close. Many factors could cause this: the measured restitution coefficient depends strongly on the shape and material of the object, the surface roughness of the reflecting surface, the sphericity of the object, and the measurement error. A small measurement error may also cause a shift of energy to the translational or rotational components [23]. Furthermore, since there was no change in the coefficient of restitution among the three kinds of marble, it can be stated that the diameter of the marble has no effect on the determined coefficient of restitution. Thus, this study confirms that the bouncing-ball experiment using Phyphox yields valid data and can be implemented to determine the coefficient of restitution.

4 Conclusion

This research succeeded in determining the coefficient of restitution with a cheap and practical method, using the phenomenon of bouncing balls and a smartphone integrated with the Phyphox application. The value of the coefficient of restitution for each marble was 0.93, 0.92, and 0.92, while the average errors were 0.65%, 0.85%, and 1.43%, respectively; the average error of the overall measurement is 0.97%. This error depends strongly on the shape of the object: the rounder the object, the higher the accuracy.
In this study, the determination of the coefficient of restitution was carried out in two ways: by comparing the heights of the ball's bounces and by comparing the time intervals between the $n$-th and $(n+1)$-th bounces. The values of the coefficient of restitution produced by these two methods were nearly identical. Thus, this study confirmed that the bouncing-ball experiment using Phyphox yields valid data and can be implemented for determining the coefficient of restitution.

References

[1] M. Heckel, A. Glielmo, N. Gunkelmann, and T. Pöschel, "Can we obtain the coefficient of restitution from the sound of a bouncing ball?," Phys. Rev. E, 93(3), 1-10, 2016.
[2] D. Serero, N. Gunkelmann, and T. Pöschel, "Hydrodynamics of binary mixtures of granular gases with stochastic coefficient of restitution," J. Fluid Mech., 781, 595-621, 2015.
[3] T. Pöschel, N. V. Brilliantov, and T. Schwager, "Long-time behavior of granular gases with impact-velocity dependent coefficient of restitution," Phys. A: Stat. Mech. Appl., 325(1-2), 274-283, 2003.
[4] T. Schwager and T. Pöschel, "Coefficient of restitution and linear-dashpot model revisited," Granul. Matter, 9(6), 465-469, 2007.
[5] B. Feng, W. Sun, L. Shi, B. Sun, T. Zhang, and J. Wu, "Determination of restitution coefficient of potato tubers collision in harvest and analysis of its influence factors," Trans. Chinese Soc. Agric. Eng., 33(13), 50-57, 2017.
[6] M. C. Marinack, R. E. Musgrave, and C. F. Higgs, "Experimental investigations on the coefficient of restitution of single particles," Tribol. Trans., 56(4), 572-580, 2013.
[7] M. Montaine, M. Heckel, C. Kruelle, T. Schwager, and T. Pöschel, "Coefficient of restitution as a fluctuating quantity," Phys. Rev. E, 84(4), 3-7, 2011.
[8] B. Crüger et al., "Coefficient of restitution for particles impacting on wet surfaces: an improved experimental approach," Particuology, 25, 1-9, 2016.
[9] J. Hlosta, D. Žurovec, J. Rozbroj, Á. Ramírez-Gómez, J. Nečas, and J. Zegzulka, "Experimental determination of particle-particle restitution coefficient via double pendulum method," Chem. Eng. Res. Des., 135, 222-233, 2018.
[10] D. B. Hastie, "Experimental measurement of the coefficient of restitution of irregular shaped particles impacting on horizontal surfaces," Chem. Eng. Sci., 101, 828-836, 2013.
[11] X. Li, M. Dong, D. Jiang, S. Li, and Y. Shang, "The effect of surface roughness on normal restitution coefficient, adhesion force and friction coefficient of the particle-wall collision," 362, Elsevier B.V., 2020.
[12] S. Singh and D. Tafti, "GT2013-95623: Predicting the coefficient of restitution for particle wall," 1-9, 2013.
[13] Z. Jiang, J. Du, C. Rieck, A. Bück, and E. Tsotsas, "PTV experiments and DEM simulations of the coefficient of restitution for irregular particles impacting on horizontal substrates," Powder Technol., 360, 352-365, 2020.
[14] H. Tang, R. Song, Y. Dong, and X. Song, "Measurement of restitution and friction coefficients for granular particles and discrete element simulation for the tests of glass beads," Materials (Basel), 12(19), 2019.
[15] P. Müller, M. Heckel, A. Sack, and T. Pöschel, "Complex velocity dependence of the coefficient of restitution of a bouncing ball," Phys. Rev. Lett., 110(25), 1-5, 2013.
[16] R. Cross, "Behaviour of a bouncing ball," Phys. Educ., 50(3), 335-341, 2015.
[17] H. A. Ewar, M. E. Bahagia, V. Jeluna, R. B. Astro, and A. Nasar, "Penentuan konstanta pegas menggunakan aplikasi Phyphox pada peristiwa osilasi pegas" [Determination of spring constants using the Phyphox application on spring oscillation events], J. Kumparan Fis., 4(3), 155-162, 2021.
[18] I. Boimau, A. Y. Boimau, and W. Liu, "Eksperimen gerak jatuh bebas berbasis smartphone menggunakan aplikasi Phyphox" [Smartphone-based free-fall motion experiment using the Phyphox application], in Seminar Nasional Ilmu Fisika dan Terapannya, 67-75, 2021.
[19] J. Pebralia and I. Amri, "Eksperimen gerak pendulum menggunakan smartphone berbasis Phyphox: penerapan praktikum fisika dasar selama masa Covid-19" [Pendulum motion experiments using a Phyphox-based smartphone: basic physics practicum during the Covid-19 period], JIFP (Jurnal Ilmu Fis. dan Pembelajarannya), 5(2), 10-14, 2021.
[20] S. Yasaroh, H. Kuswanto, D. Ramadhanti, A. Azalia, and H. Hestiana, "Utilization of the Phyphox application (physical phone experiment) to calculate the moment of inertia of hollow cylinders," J. Ilm. Pendidik. Fis. Al-Biruni, 10(2), 231-240, 2021.
[21] Y. F. Ilmi, A. B. Susila, and B. H. Iswanto, "Using accelerometer smartphone sensor and Phyphox for friction experiment in high school," J. Phys. Conf. Ser., 2019(1), 2021.
[22] D. Dahnuss, P. Marwoto, R. S. Iswari, and P. Listiaji, "Marbles and smartphone on physics laboratory: an investigation for finding coefficient of restitution," J. Phys. Conf. Ser., 1918(2), 2021.
[23] J. E. Higham, P. Shepley, and M. Shahnam, "Measuring the coefficient of restitution for all six degrees of freedom," Granul. Matter, 21(2), 2019.
[24] C. E. Aguiar and F. Laudares, "Listening to the coefficient of restitution and the gravitational acceleration of a bouncing ball," Am. J. Phys., 71(5), 499-501, 2003.

International Journal of Applied Sciences and Smart Technologies
Volume 3, Issue 1, pages 111-124
p-ISSN 2655-8564, e-ISSN 2685-9432

Obtaining the Efficiency and Effectiveness of Fin in Unsteady State Conditions Using Explicit Finite Difference Method

Petrus Kanisius Purwadi1,*, Budi Setyahandana1, R.B.P.
Harsilo1
1Department of Mechanical Engineering, Faculty of Science and Technology, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: pkpurwadi1966@gmail.com
(received 10-04-2021; revised 10-05-2021; accepted 17-05-2021)

Abstract

This paper discusses finding the fin efficiency and effectiveness in unsteady-state conditions using a numerical computation method. The straight fin under review has a cross-sectional area that changes with the position x; the cross section of the fin is rectangular, and the fin is composed of two different metal materials. The computation method used is the explicit finite difference method. The properties of the fin material are assumed to be fixed, i.e. they do not change with temperature. When the stability requirements are met, the explicit finite difference method yields satisfactory results. The method can be developed for various other fin shapes, for fins composed of two or more different materials, for a time-varying convection heat transfer coefficient, and for fin material properties that change with temperature.

Keywords: fin, efficiency, effectiveness, finite difference, unsteady state

1 Introduction

In the design of fins, the important quantities to know are the efficiency and the effectiveness of the fins, and there are many ways to obtain them. For certain fin shapes, the values of the efficiency and effectiveness can be found using existing charts; for fin shapes without such charts, other methods are needed. For cases in the unsteady state, the solution becomes more complicated. One approach is to use numerical computation methods.
Several articles [1-7] related to the efficiency and effectiveness of fins in the unsteady state have helped in solving this problem. The case discussed here is a fin composed of two different materials in an unsteady state. The shape and cross section of the fin differ from those of previous studies: the cross-sectional area of the fin changes with the position x, and the chosen shape is a truncated rectangular pyramid. Figure 1 presents the fin shape to be discussed, with total fin length $L$, where $L = L_1 + L_2$ and $L_1 = L_2$.

Figure 1. Straight fin with a truncated rectangular pyramid shape

The mathematical model for this problem is expressed by equation (1):

$$\frac{\partial^2 T(x,t)}{\partial x^2} + \frac{hP}{kA_p}\left(T_\infty - T(x,t)\right) = \frac{1}{\alpha}\frac{\partial T(x,t)}{\partial t}, \quad 0 < x < L,\ x \neq L_1,\ t > 0, \qquad (1)$$

$$\alpha = \frac{k}{\rho c}, \qquad (2)$$

with initial condition

$$T(x,0) = T_i = T_b, \quad 0 \le x \le L,\ t = 0, \qquad (3)$$

and boundary conditions

$$T(0,t) = T_b, \quad x = 0,\ t > 0, \qquad (4)$$

$$k_2\frac{\partial T(x,t)}{\partial x} = h\frac{A_s}{A_{p1}}\left(T(x,t) - T_\infty\right), \quad x = L,\ t > 0, \qquad (5)$$

$$k_1\frac{\partial T(x,t)}{\partial x} = k_2\frac{A_{p2}}{A_{p1}}\frac{\partial T(x,t)}{\partial x} + h\frac{A_s}{A_{p1}}\left(T_\infty - T(x,t)\right), \quad x = L_1,\ t > 0. \qquad (6)$$

In equations (1)-(6) we have used the notations:

x : position on the fin, m
t : time, s
T(x,t) : temperature at position x and time t, °C
T∞ : fluid temperature, °C
Ti : initial temperature of the fin, °C
Tb : temperature at the base of the fin, °C
L1 : length of the fin section made of material 1, m
L2 : length of the fin section made of material 2, m
L : total length of the fin, L = L1 + L2, m
Ap : cross-sectional area of the fin, m²
P : perimeter of the cross section, m
k : thermal conductivity of fin material 1 or material 2, W/(m·°C)
k1 : thermal conductivity of fin material 1, W/(m·°C)
k2 : thermal conductivity of fin material 2, W/(m·°C)
h : convection heat transfer coefficient, W/(m²·°C)
ρ : density of the material, kg/m³
c : specific heat of the material, J/(kg·°C)
c1 : specific heat of material 1, J/(kg·°C)
c2 : specific heat of material 2, J/(kg·°C)
α : thermal diffusivity of the material, m²/s

1.1 Calculation Steps

The steps used to calculate the efficiency and effectiveness of the fin in an unsteady-state condition using the numerical method are as follows:

1. Determine the fixed variables, such as $h, \rho_1, \rho_2, c_1, c_2, k_1, k_2, a_1, b_1, a_2, b_2, T_\infty, T_i, T_b, L_1, L_2, L, P, A_{si}, A_{pi}, m, \Delta x, \Delta t$.
2. Calculate the temperature of each control volume, time step by time step, from control volume 1 to control volume m. Control volume 1 is at the base of the fin and control volume m is at the tip of the fin; the total number of control volumes is m.
3. Calculate the actual heat flow rate released by the fin ($q_{act}$), the ideal heat flow rate released by the fin ($q_{ideal}$), and the heat flow rate released if there were no fin ($q_{no,fin}$). These calculations are carried out at every time step.
4. Calculate the fin efficiency (η) at every time step.
5.
Calculate the fin effectiveness (ε) at every time step.

Figure 2 presents a flow chart for obtaining the efficiency and effectiveness of the fins.

Figure 2. Flow chart for obtaining the efficiency and effectiveness of the fins

Some of the assumptions used in this calculation are:

1. The fluid temperature and the convection heat transfer coefficient are uniform and constant.
2. The material properties (density, specific heat, thermal conductivity) are uniform and constant.
3. The shape of the fin is fixed and its volume does not change during the process.
4. Heat transfer by radiation is negligible.
5. The joint between the two fin materials is assumed to be perfectly bonded.
6. No energy is generated in the fin.
7. Heat transfer by conduction in the fin is assumed to take place in one direction only, the x direction.
8. The entire surface of the fin is in contact with the surrounding fluid.

1.2 Control Volume and Energy Balance on the Control Volume

Figure 3 shows the fin divided into many control volumes. The total number of control volumes is m. Control volume 1 is at the base of the fin, and control volume m is at the tip of the fin. Control volume p is located at the border of the two fin materials: half of it is made of material 1 and half of material 2. The control volumes are numbered sequentially from the base of the fin to its tip, 1, 2, 3, ..., m, and the distance between control volumes is Δx. Each control volume i = 1, 2, 3, ..., m − 1 exchanges heat by convection through its lateral (blanket) surface.
Control volume m, in addition to convection through its lateral surface, also transfers heat by convection through the cross-sectional area at the fin tip. The thickness of the control volumes at the base and at the tip of the fin is 0.5Δx, while the thickness of control volumes i = 2, 3, ..., m − 1 is Δx. In Figures 4, 5 and 6, the convection heat flow is denoted by $q_c$; the conduction heat flow from the control volume at position i − 1 into the control volume at i is denoted by $q_{i-1}$, and the conduction heat flow from the control volume at position i + 1 into the control volume at i is denoted by $q_{i+1}$.

Figure 3. Distribution of control volumes on the fin

The energy balance of a control volume on the fin, without energy generation, in the unsteady-state condition can be stated as follows: [all energy entering the control volume during the time interval Δt] = [the energy change in the control volume during the time interval Δt]. This can also be expressed by equation (7):

$$\sum_{i=1}^{k} q_i = \rho c V \frac{\Delta T}{\Delta t}. \qquad (7)$$

Figure 4 presents the energy balance for a control volume inside the fin body, i.e. at positions i = 2, 3, 4, ..., p − 1 (between the base of the fin and the border of the two fin materials) and i = p + 1, p + 2, ..., m − 1 (between the border of the two fin materials and the tip of the fin). The energy balance for these control volumes can be expressed by equation (8):

$$q_{i-1} + q_{i+1} + q_c = \rho c V \frac{T_i^{n+1} - T_i^n}{\Delta t}. \qquad (8)$$

Figure 4.
Energy balance at the control volumes i = 2, 3, 4, ..., p − 2, p − 1 and i = p + 1, p + 2, p + 3, ..., m − 2, m − 1

In equations (8)-(10), $T_i^n$ is the temperature of control volume i at time step n (time t), and $T_i^{n+1}$ is the temperature of control volume i at time step n + 1 (time t + Δt). Each control volume is assumed to have a uniform temperature. The variable V represents the volume, and Δt represents the time interval.

Figure 5. Energy balance at the control volume at i = p

Figure 5 presents the energy balance at the control volume at the boundary of the two materials, i = p. This control volume is composed of two different materials: the thickness of the material-1 part is 0.5Δx and the thickness of the material-2 part is 0.5Δx. The energy balance at position i = p can be expressed by equation (9):

$$q_{i-1} + q_{i+1} + q_c = \rho_1 c_1 V_1 \frac{T_i^{n+1} - T_i^n}{\Delta t} + \rho_2 c_2 V_2 \frac{T_i^{n+1} - T_i^n}{\Delta t}. \qquad (9)$$

Figure 6. Energy balance at the control volume at i = m

Figure 6 presents the energy balance at the control volume at the tip of the fin, i = m. The control volume at the fin tip transfers heat by convection through its lateral surface and through the cross-sectional surface of the fin tip; its thickness is 0.5Δx. The energy balance at position i = m can be expressed by equation (10):

$$q_{i-1} + q_{c1} + q_{c2} = \rho_2 c_2 V_2 \frac{T_i^{n+1} - T_i^n}{\Delta t}. \qquad (10)$$

1.3 Temperature Distribution in the Fin

By using equation (7), the equations used to calculate the temperature of control volumes i = 2, 3, 4, ..., m − 1, m for t > 0 can be derived.
The temperature of the control volumes at i = 2, 3, 4, ..., p − 2, p − 1 for t > 0 is given by equation (11):

$$T_i^{n+1} = \frac{\alpha_1 \Delta t}{\Delta x^2}\left[\left(T_{i-1}^n - 2T_i^n + T_{i+1}^n\right) + Bi_1\left(T_\infty - T_i^n\right)\right] + T_i^n. \qquad (11)$$

The stability requirement for equation (11) is expressed by equation (12):

$$\Delta t \le \frac{\rho_1 c_1 \Delta x^2}{k_1\left(2 + Bi_1 \dfrac{\Delta x\, A_{si}}{A_p}\right)}. \qquad (12)$$

The temperature of the control volume at i = p for t > 0 is given by equation (13):

$$T_i^{n+1} = \frac{\Delta t}{\rho_1 c_1 V_1 + \rho_2 c_2 V_2}\left[k_1 A_p \frac{T_{i-1}^n - T_i^n}{\Delta x} + k_2 A_p \frac{T_{i+1}^n - T_i^n}{\Delta x} + hA_s\left(T_\infty - T_i^n\right)\right] + T_i^n. \qquad (13)$$

The stability requirement for equation (13) is expressed by equation (14):

$$\Delta t \le \frac{\rho_1 c_1 V_1 + \rho_2 c_2 V_2}{\dfrac{k_1 A_p}{\Delta x} + \dfrac{k_2 A_p}{\Delta x} + hA_s}. \qquad (14)$$

The temperature of the control volumes at i = p + 1, p + 2, ..., m − 1 for t > 0 is given by equation (15):

$$T_i^{n+1} = \frac{\alpha_2 \Delta t}{\Delta x^2}\left[\left(T_{i-1}^n - 2T_i^n + T_{i+1}^n\right) + Bi_2\left(T_\infty - T_i^n\right)\right] + T_i^n. \qquad (15)$$

The stability requirement for equation (15) is expressed by equation (16):

$$\Delta t \le \frac{\rho_2 c_2 \Delta x^2}{k_2\left(2 + Bi_2 \dfrac{\Delta x\, A_{si}}{A_p}\right)}. \qquad (16)$$

The temperature of the control volume at the fin tip, i = m, for t > 0 is given by equation (17):

$$T_i^{n+1} = \frac{\alpha_2 \Delta t}{0.5\,\Delta x^2}\left[\left(T_{i-1}^n - T_i^n\right) + Bi_2\left(T_\infty - T_i^n\right) + Bi_2\frac{A_s}{A_p}\left(T_\infty - T_i^n\right)\right] + T_i^n. \qquad (17)$$

The stability requirement for equation (17) is expressed by equation (18):

$$\Delta t \le \frac{0.5\,\rho_2 c_2 \Delta x^2}{k_2\left(1 + Bi_2 + Bi_2\dfrac{A_s}{A_p}\right)}. \qquad (18)$$

1.4 Heat Flow Rate, Efficiency and Effectiveness

The actual heat released by the fin in the unsteady-state condition can be calculated by equation (19):

$$q_{\text{act}}^n = \sum_{i=1}^{m} hA_{si}\left(T_i^n - T_\infty\right). \qquad (19)$$

The ideal heat released by the fin can be calculated by equation (20):

$$q_{\text{ideal}} = \sum_{i=1}^{m} hA_{si}\left(T_b - T_\infty\right). \qquad (20)$$
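A minimal sketch of one explicit update of the single-material interior-node form, Eq. (11), together with a stability check in the spirit of Eq. (12). The full scheme in the text also handles the interface node (Eq. 13), the material-2 region (Eq. 15) and the tip node (Eq. 17); here both end nodes are simply held fixed, the diffusivity is an iron-like value, and the Biot-like factor `bi` is an illustrative placeholder:

```python
def explicit_step(T, fo, bi, T_inf):
    """One explicit time step of Eq. (11):
    T_i <- T_i + Fo*[(T_{i-1} - 2*T_i + T_{i+1}) + Bi*(T_inf - T_i)]."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + fo * ((T[i-1] - 2.0*T[i] + T[i+1]) + bi * (T_inf - T[i]))
    return Tn

alpha = 2.28e-5                  # m^2/s, iron-like diffusivity k/(rho*c)
dx, dt = 0.004167, 0.05          # grid spacing (m) and time step (s)
fo = alpha * dt / dx**2          # mesh Fourier number alpha*dt/dx^2
bi = 0.01                        # illustrative convection factor (assumed)
assert fo * (2.0 + bi) <= 1.0, "time step too large: explicit scheme unstable"

T = [100.0] * 25                 # fin initially at T_b = 100 C, fluid at 30 C
for _ in range(200):
    T = explicit_step(T, fo, bi, T_inf=30.0)
```

Interior temperatures fall toward the fluid temperature while the clamped end nodes stay at 100 °C; violating the stability bound makes the update oscillate and diverge, which is exactly why every Δt in Eqs. (12)-(18) is bounded.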
The heat released from the base area if the fin length is considered zero can be calculated by equation (21):

$$q_{\text{no fin}} = hA_b\left(T_b - T_\infty\right). \qquad (21)$$

The efficiency of the fin in an unsteady-state condition can be calculated by equation (22):

$$\eta^n = \frac{q_{\text{act}}^n}{q_{\text{ideal}}} = \frac{\sum_{i=1}^{m} hA_{si}\left(T_i^n - T_\infty\right)}{\sum_{i=1}^{m} hA_{si}\left(T_b - T_\infty\right)}. \qquad (22)$$

The effectiveness of the fin in an unsteady-state condition can be calculated by equation (23):

$$\epsilon^n = \frac{q_{\text{act}}^n}{q_{\text{no fin}}} = \frac{\sum_{i=1}^{m} hA_{si}\left(T_i^n - T_\infty\right)}{hA_b\left(T_b - T_\infty\right)}. \qquad (23)$$

2 Research Methodology

The efficiency and effectiveness are obtained using a numerical method, namely the explicit finite difference method. The selected fin shape is a truncated rectangular pyramid (Figure 1), composed of two different materials; the properties of the fin materials are presented in Table 1. The total fin length is L = 10 cm, and the lengths of the two material sections are equal, L1 = L2 = 5 cm. The cross section at the base of the fin has width a1 = 1 cm and height b1 = 0.5 cm; the cross section at the tip of the fin has width a2 = 0.5 cm and height b2 = 0.25 cm. The number of control volumes on the fin is m = 25, the distance between control volumes is Δx = 0.004167 m, and the time interval is Δt = 0.05 s. The temperature of the fluid around the fin is T∞ = 30 °C, the base temperature of the fin is held at Tb = 100 °C, the initial temperature of the fin is Ti = Tb = 100 °C, and the convection heat transfer coefficient is h = 100 W/(m²·°C).
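The heat-rate and performance definitions in Eqs. (19)-(23) can be sketched as follows, treating the no-fin rate as the single base-area term $hA_b(T_b - T_\infty)$; the areas and temperature profile below are illustrative placeholders, not the truncated-pyramid geometry of the paper:

```python
def fin_performance(T, As, Ab, h, T_b, T_inf):
    """Fin efficiency (Eq. 22) and effectiveness (Eq. 23) from a
    temperature profile T over the control volumes with surface areas As."""
    q_act = sum(h * a * (t - T_inf) for a, t in zip(As, T))  # Eq. (19)
    q_ideal = sum(h * a * (T_b - T_inf) for a in As)         # Eq. (20)
    q_no_fin = h * Ab * (T_b - T_inf)                        # Eq. (21)
    return q_act / q_ideal, q_act / q_no_fin                 # Eqs. (22)-(23)

# At t = 0 the whole fin is still at T_b, so the efficiency is exactly 1
eta, eps = fin_performance(T=[100.0, 100.0, 100.0],
                           As=[1e-4, 1e-4, 1e-4], Ab=5e-5,
                           h=100.0, T_b=100.0, T_inf=30.0)
print(eta, eps)  # efficiency 1.0, effectiveness ≈ 6.0
```

As the fin cools, $q_{\text{act}}^n$ drops while $q_{\text{ideal}}$ stays fixed, so the efficiency and effectiveness traces both fall with time, matching the trends in Figures 9 and 10.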
Table 1. Properties of the materials (Y.A. Çengel, Heat Transfer: A Practical Approach, pp. 868-870, see [8])

Material        Density ρ (kg/m³)   Thermal conductivity k (W/(m·°C))   Specific heat c (J/(kg·°C))
Copper (Cu)     8933                401                                 385
Aluminum (Al)   2702                237                                 903
Zinc (Zn)       7140                116                                 389
Nickel (Ni)     8900                90.7                                444
Iron (Fe)       7870                80.2                                447

3 Results and Discussion

The calculated temperature distribution, actual heat flow rate, fin efficiency and fin effectiveness are presented in Figures 7, 8, 9 and 10. The solution using the explicit finite difference method gives satisfactory results: as long as the stability requirements are met, the results are realistic. The truncated rectangular pyramid is only one example of a fin shape; in the same way, the explicit finite difference method can be extended to the calculation of efficiency and effectiveness for other fin shapes.

Figure 7 presents the temperature distribution in the fin for various material compositions. At the boundary of the two materials, the temperature distribution shows a kink, which is more pronounced for the iron-copper composition than for the iron-nickel composition. The unsteady-state temperature distribution is influenced by material properties such as density, thermal conductivity and specific heat. Figures 8, 9 and 10 present the actual heat flow rate released by the fin, the fin efficiency and the fin effectiveness; all of these results depend on the temperature distribution in the fin.

Figure 7. Temperature of the fin at time t = 100 seconds
Figure 8. Heat flow rate of the fin
Figure 9. Fin efficiency
Figure 10. Fin effectiveness
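As a quick cross-check of Table 1, the thermal diffusivity $\alpha = k/(\rho c)$ of Eq. (2) can be tabulated per material; the values below come straight from the table, and the helper name is illustrative:

```python
# material: (density kg/m^3, thermal conductivity W/(m.C), specific heat J/(kg.C))
properties = {
    "Cu": (8933, 401, 385),
    "Al": (2702, 237, 903),
    "Zn": (7140, 116, 389),
    "Ni": (8900, 90.7, 444),
    "Fe": (7870, 80.2, 447),
}

def diffusivity(name: str) -> float:
    """Thermal diffusivity alpha = k / (rho * c), Eq. (2), in m^2/s."""
    rho, k, c = properties[name]
    return k / (rho * c)

print(f"{diffusivity('Cu'):.3e}")  # 1.166e-04
```

Copper's diffusivity is roughly five times iron's, which is consistent with the sharper temperature kink reported for the iron-copper composition.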
[Figures 7-10 plot, for Fe-Cu, Fe-Al, Fe-Zn and Fe-Ni fins: temperature (°C) versus control volume, and heat flow rate (W), fin efficiency, and fin effectiveness versus time t (s).]

The use of the explicit finite difference method can also be developed for conduction heat flow beyond the one-dimensional case, in particular for the two-dimensional unsteady-state case, in which the conduction heat flow in the object takes place in two directions, x and y. The two-dimensional case occurs for fins whose length and width are much greater than their thickness. The method can likewise be developed for the three-dimensional case, with conduction heat flow in the x, y and z directions. The fin discussed here is composed of two different materials, but the explicit finite difference method can equally be used for fins composed of three or more different materials. In this study the fin material properties do not change with temperature; the method can also be used for fins whose material properties, such as density, specific heat and thermal conductivity, change with temperature.

4 Conclusion

As long as the stability requirements are met, the explicit finite difference method can be used well to calculate the efficiency and effectiveness of fins in the unsteady-state condition.
The use of the explicit finite difference method can be developed for various other fin shapes, for fins composed of two or more different materials, for a time-varying convection heat transfer coefficient, and for fin material properties that change with temperature.

Acknowledgements

This research was conducted at Sanata Dharma University. The authors thank Sanata Dharma University for supporting this research.

References

[1] A.W. Vidjabhakti, P.K. Purwadi, S. Mungkasi, "Efficiency and effectiveness of a fin having pentagonal cross section dependent on the one dimensional position", Proceedings of the 1st International Conference on Science and Technology for an Internet of Things, Yogyakarta, Indonesia, 20 October 2018, doi:10.4108/eai.19-10-2018.2282540.
[2] K.S. Ginting, P.K. Purwadi, S. Mungkasi, "Efficiency and effectiveness of a fin having capsule shaped cross section dependent on the one dimensional position", Proceedings of the 1st International Conference on Science and Technology for an Internet of Things, Yogyakarta, Indonesia, 20 October 2018, doi:10.4108/eai.19-10-2018.2282543.
[3] P.K. Purwadi and B.Y. Pratama, "Efficiency and effectiveness of a truncated cone-shaped fin consisting of two different materials in the steady-state", AIP Conference Proceedings 2202, 020091 (2019), doi:10.1063/1.5141704.
[4] P.K. Purwadi and M. Seen, "Efficiency and effectiveness of a fin having the capsule-shaped cross section in the unsteady state", AIP Conference Proceedings 2202, 020092 (2019), doi:10.1063/1.5141705.
[5] P.K.
Purwadi and M. Seen, "The efficiency and effectiveness of fins made from two different materials in unsteady-state", Journal of Physics: Conference Series, Volume 1511, International Conference on Science Education and Technology (ICoSETH) 2019, 23 November 2019, Surakarta, Indonesia, doi:10.1088/1742-6596/1511/1/012082.
[6] P.K. Purwadi, Y.A. Vantosa, S. Mungkasi, "Efficiency and effectiveness of a rotation-shaped fin having the cross-section area dependent on the one dimensional position", Proceedings of the 1st International Conference on Science and Technology for an Internet of Things, Yogyakarta, Indonesia, 20 October 2018, doi:10.4108/eai.20-9-2019.2292097.
[7] T.D. Nugroho and P.K. Purwadi, "Fins effectiveness and efficiency with position function of rhombus sectional area in unsteady condition", AIP Conference Proceedings 1788, 030034 (2017), doi:10.1063/1.4968287.
[8] Y.A. Çengel, Heat Transfer: A Practical Approach (University of Nevada, Reno: The McGraw-Hill Companies, Inc., United States of America, 2008), pp. 163-164.
International Journal of Applied Sciences and Smart Technologies, Volume 4, Issue 1, pages 59-74, p-ISSN 2655-8564, e-ISSN 2685-9432

The Effect of Motor Parameters on the Induction Motor Speed Sensorless Control System Using Luenberger Observer

Bernadeta Wuri Harini1,*
1 Department of Electrical Engineering, Sanata Dharma University, Yogyakarta, Indonesia
*Corresponding author: wuribernard@usd.ac.id
(received 08-04-2022; revised 27-05-2022; accepted 29-05-2022)

Abstract

A sensorless control system is a control system without a sensor for the controlled variable; the controlled variable is instead estimated by an observer. In this investigation, sensorless control is used to control induction motor speed, with a Luenberger observer as the estimator. One drawback of sensorless control is its dependence on precise motor parameter values. This research investigates the effect of the induction motor parameters, i.e. the motor resistances and inductances, on a speed sensorless control system. Differences between the induction motor parameters in the controller and the actual values affect the system response. The differences in the values of Rr and Rs that can be applied are at most 50%. Small differences in the inductance values, however, greatly affect the system response: to get a good response, the differences in Ls and Lr must be between -5% and +5%, while the difference in Lm must be between -3% and +3%.
This work is licensed under a Creative Commons Attribution 4.0 International License.

Keywords: inductance, induction motor, Luenberger observer, resistance, sensorless

1 Introduction

A control system without a sensor for the controlled variable, often called "sensorless control," is still an active research topic. Sensorless control systems evolved to overcome the sensor-installation challenges faced by sensor-based control systems, which are widely used by researchers such as Z. Alpholicy X. et al. [1] and Y. E. Loho et al. [2]. Sensors drive up cost and complicate installation [3]. In a sensorless system, the controlled variable is not measured directly by a sensor; it is estimated by an observer from the plant's current input [4]. Here, an observer estimates the motor speed from the stator currents. In this investigation, sensorless control is used to control the speed of an induction motor.

The induction motor is an alternating current (AC) motor. When driving an AC motor, both the phase angle and the magnitude of the current (the current vector) must be controlled [4], unlike in a DC motor. In vector control, the torque- and flux-producing components of the AC motor current are decoupled so that they can be controlled independently.

Precise motor parameter values are one of the drawbacks of the sensorless approach to speed control: for sensorless speed control to work properly, the parameter values must be accurately known. As a result, a variety of approaches for determining induction motor parameter values have been proposed by different researchers [5][6]. The importance of induction motor parameters is also underlined in [7], which shows that a disparity in parameter values causes inaccuracies in motor speed.
However, these studies do not indicate how large a deviation in the motor parameter values the speed controller can tolerate. A motor speed error occurs if the motor parameters deviate from the real parameters [8]; in that research, an MRAS observer was used to estimate the speed of a permanent magnet synchronous motor (PMSM). In this research, the effect of the induction motor parameters, i.e. the motor resistances and inductances, on a speed sensorless control system is investigated. The Luenberger observer is used to estimate the motor speed.

2 Research Methodology

This section presents the research methodology used in this work.

2.1. Induction motor sensorless control system

The block diagram of the system is shown in Figure 1. Each part is explained below.

Figure 1. Block diagram of the system (inverter with PWM, decoupling, IP speed controller, PI current controllers, dq/abc transforms, Luenberger observer, and induction motor)

2.1.1. Induction motor mathematical model

Using the Clarke and Park transforms, the three-phase mathematical model of the induction motor is transformed into a two-phase mathematical model. The Clarke transformation converts balanced three-phase values (v_sa, v_sb, v_sc) into a two-phase stationary reference frame (α, β, 0) [9]:

$$\begin{bmatrix} v_{s\alpha} \\ v_{s\beta} \\ v_{s0} \end{bmatrix} = \frac{2}{3} \begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} v_{sa} \\ v_{sb} \\ v_{sc} \end{bmatrix} \qquad (1)$$

where v_sα and v_sβ are the stator voltages in the α-β reference frame.
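As a numerical check, equation (1) can be applied to a balanced three-phase set; a minimal NumPy sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

# Amplitude-invariant Clarke transform, equation (1): converts balanced
# three-phase voltages (v_sa, v_sb, v_sc) into the stationary
# (alpha, beta, 0) reference frame with the 2/3 scaling used in the paper.
CLARKE = (2.0 / 3.0) * np.array([
    [1.0, -0.5,            -0.5],
    [0.0,  np.sqrt(3) / 2, -np.sqrt(3) / 2],
    [0.5,  0.5,             0.5],
])

def clarke(v_abc):
    """Return [v_alpha, v_beta, v_0] for a three-phase voltage vector."""
    return CLARKE @ np.asarray(v_abc, dtype=float)

# For a balanced sinusoidal set the zero-sequence term vanishes and
# (v_alpha, v_beta) = (cos(theta), sin(theta)) with this scaling.
theta = 0.3
v_ab0 = clarke([np.cos(theta),
                np.cos(theta - 2 * np.pi / 3),
                np.cos(theta + 2 * np.pi / 3)])
```

With this 2/3 (amplitude-invariant) scaling, v_α equals the instantaneous phase-a value for a balanced set.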
The Park transformation transforms the stationary reference frame into a rotating (d, q, 0) reference frame:

$$\begin{bmatrix} v_{sd} \\ v_{sq} \end{bmatrix} = \begin{bmatrix} \cos\theta_e & \sin\theta_e \\ -\sin\theta_e & \cos\theta_e \end{bmatrix} \begin{bmatrix} v_{s\alpha} \\ v_{s\beta} \end{bmatrix} \qquad (2)$$

where θ_e is the electrical angle of the motor, while v_sd and v_sq are the stator voltages in the d-q reference frame. The induction motor mathematical model in the d-q frame [10] is

$$\frac{d}{dt} i_{sd} = \frac{1}{\sigma L_s} V_{sd} - \left( \frac{R_s}{\sigma L_s} + \frac{1-\sigma}{\sigma \tau_r} \right) i_{sd} + \frac{1-\sigma}{\sigma \tau_r} i_{rd} + \frac{(1-\sigma) N_p \omega_r}{\sigma} i_{rq} + \omega_e i_{sq} \qquad (3)$$

$$\frac{d}{dt} i_{sq} = \frac{1}{\sigma L_s} V_{sq} - \left( \frac{R_s}{\sigma L_s} + \frac{1-\sigma}{\sigma \tau_r} \right) i_{sq} + \frac{1-\sigma}{\sigma \tau_r} i_{rq} + \frac{(1-\sigma) N_p \omega_r}{\sigma} i_{rd} + \omega_e i_{sd} \qquad (4)$$

$$\frac{d}{dt} i_{rd} = -\frac{R_r}{L_r} i_{rd} + \frac{R_r}{L_r} i_{sd} + (\omega_e - N_p \omega_r) i_{rq} \qquad (5)$$

$$\frac{d}{dt} i_{rq} = -\frac{R_r}{L_r} i_{rq} + \frac{R_r}{L_r} i_{sq} - (\omega_e - N_p \omega_r) i_{rd} \qquad (6)$$

$$\frac{d}{dt} \theta_e = N_p \omega_r + \frac{i_{sq}}{\tau_r i_{mr}} \qquad (7)$$

$$\frac{d}{dt} \omega_r = \frac{1}{J} (T_e - T_L - B \omega_r) \qquad (8)$$

$$\frac{d}{dt} \theta_r = \omega_r \qquad (9)$$

where i_sd and i_sq are the stator currents in the d- and q-frames, i_rd and i_rq are the rotor currents in the d- and q-frames, θ_e is the voltage vector angle, and ω_r is the rotor speed. Table 1 shows the parameter values of the induction motor used in this paper; the parameters are illustrated in Figure 2 [11].

Table 1. Parameter values of the induction motor

Symbol  Description        Value   Unit
Np      pole pairs         2       pairs
Rr      rotor resistance   2.9     Ω
Rs      stator resistance  2.76    Ω
Ls      stator inductance  0.2349  H
Lr      rotor inductance   0.2349  H
Lm      mutual inductance  0.2279  H

Figure 2. Equivalent circuit in the d-q frame

2.1.2. Luenberger observer

The Luenberger observer is an observer that uses an adaptive method to estimate the controlled variable [12].
The estimation equations [10] are

$$\frac{d}{dt} \hat{i}_{sd} = -\left( \frac{R_s}{\sigma L_s} + \frac{1-\sigma}{\sigma \tau_r} \right) \hat{i}_{sd} + \omega_e \hat{i}_{sq} + \frac{L_m}{\sigma L_s L_r \tau_r} \hat{\psi}_{rd} + \frac{L_m N_p \omega_r}{\sigma L_s L_r} \hat{\psi}_{rq} + \frac{1}{\sigma L_s} v_{sd} + g_1 (i_{sd} - \hat{i}_{sd}) - g_2 (i_{sq} - \hat{i}_{sq}) \qquad (10)$$

$$\frac{d}{dt} \hat{i}_{sq} = -\omega_e \hat{i}_{sd} + \frac{1}{\sigma L_s} \left( -R_s - \frac{L_m^2}{\tau_r L_r} \right) \hat{i}_{sq} - \frac{L_m N_p \omega_r}{\sigma L_s L_r} \hat{\psi}_{rd} + \frac{L_m}{\sigma L_s L_r \tau_r} \hat{\psi}_{rq} + \frac{1}{\sigma L_s} v_{sq} + g_2 (i_{sd} - \hat{i}_{sd}) + g_1 (i_{sq} - \hat{i}_{sq}) \qquad (11)$$

$$\frac{d}{dt} \hat{\psi}_{rd} = \frac{R_r}{L_r} L_m \hat{i}_{sd} - \frac{1}{\tau_r} \hat{\psi}_{rd} + (\omega_e - N_p \omega_r) \hat{\psi}_{rq} + g_3 (i_{sd} - \hat{i}_{sd}) - g_4 (i_{sq} - \hat{i}_{sq}) \qquad (12)$$

$$\frac{d}{dt} \hat{\psi}_{rq} = \frac{L_m}{\tau_r} \hat{i}_{sq} - (\omega_e - N_p \omega_r) \hat{\psi}_{rd} + \frac{1}{\tau_r} \hat{\psi}_{rq} + g_4 (i_{sd} - \hat{i}_{sd}) + g_3 (i_{sq} - \hat{i}_{sq}) \qquad (13)$$

where

$$g_1 = \frac{k-1}{k} \left( -\frac{R_s}{\sigma L_s} - \frac{R_r}{\sigma L_r} \right) \qquad (14)$$

$$g_2 = -\frac{k-1}{k} N_p \hat{\omega}_r \qquad (15)$$

$$g_3 = \frac{k-1}{k(\tau_r^2 N_p^2 \hat{\omega}_r^2 + 1)} \left( \frac{R_s R_r \tau_r + L_s R_r - \sigma \tau_r L_s L_r N_p^2 \hat{\omega}_r^2}{L_m} \right) \qquad (16)$$

$$g_4 = \frac{k-1}{k(\tau_r^2 N_p^2 \hat{\omega}_r^2 + 1)} \left( \frac{(R_s L_r \tau_r + L_s R_r \tau_r - \sigma L_s L_r) N_p^2 \hat{\omega}_r}{L_m} \right) \qquad (17)$$

The estimated speed (ω̂_r) is then calculated as

$$\hat{\omega}_r = K_p (\hat{\psi}_{rq} e_{isd} - \hat{\psi}_{rd} e_{isq}) + K_i \int (\hat{\psi}_{rq} e_{isd} - \hat{\psi}_{rd} e_{isq}) \, dt \qquad (18)$$

where

$$e_{isd} = i_{sd} - \hat{i}_{sd} \qquad (19)$$

$$e_{isq} = i_{sq} - \hat{i}_{sq} \qquad (20)$$

2.1.3. Decoupling and current controller

For rotor-flux-oriented vector control, the direct-axis stator current i_sd (the rotor-flux-producing component) and the quadrature-axis stator current i_sq (the torque-producing component) must be controlled separately. The stator voltage equations, however, are coupled: the direct-axis component u_sd and the quadrature-axis component u_sq each depend on both current components, so u_sd and u_sq cannot be regarded as independent control variables for the rotor flux and the electromagnetic torque. The stator currents i_sd and i_sq can only be adjusted individually (decoupled control) if the stator voltage equations are decoupled and the current components are controlled indirectly by manipulating the induction motor's terminal voltages [13].
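The speed-adaptation law (18)-(20) is a PI update driven by the cross product of the estimated rotor flux and the stator-current estimation errors. A minimal discrete-time sketch, with illustrative gains and error values (not the author's implementation):

```python
def estimate_speed(psi_rd, psi_rq, e_isd, e_isq, integral, Kp, Ki, dt):
    """One discrete update of the adaptation law (18)-(20).

    psi_rd, psi_rq : estimated rotor flux components
    e_isd, e_isq   : stator-current estimation errors (measured - estimated)
    integral       : running integral of the error term (carried between calls)
    Returns (omega_hat, updated integral).
    """
    err = psi_rq * e_isd - psi_rd * e_isq
    integral += err * dt
    omega_hat = Kp * err + Ki * integral
    return omega_hat, integral

# A constant error term accumulates linearly through the integral part,
# so the speed estimate keeps adapting until the current error vanishes.
integral = 0.0
for _ in range(100):
    omega_hat, integral = estimate_speed(
        psi_rd=0.5, psi_rq=0.1, e_isd=0.2, e_isq=0.0,
        integral=integral, Kp=0.5, Ki=1.0, dt=0.01)
```

In the full observer, this update runs alongside equations (10)-(17), with the gains g1-g4 recomputed from the current speed estimate at each step.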
The currents i_sd and i_sq are then regulated by proportional-integral (PI) current controllers. The controller outputs are determined by [14]

$$u_{sd} = \left( K_{idp} + \frac{K_{idi}}{s} \right) (i_{sd}^* - i_{sd}) \qquad (21)$$

$$u_{sq} = \left( K_{iqp} + \frac{K_{iqi}}{s} \right) (i_{sq}^* - i_{sq}) \qquad (22)$$

where

$$i_{sd} = \frac{1}{T_d s + 1} i_{sd}^* \qquad (23)$$

$$i_{sq} = \frac{1}{T_d s + 1} i_{sq}^* \qquad (24)$$

2.1.4. Speed controller

The reference current in the q-reference frame (i_sq^*) in (24) is generated by an integral-proportional (IP) speed controller [15]:

$$i_{sq}^* = \int K_i (\omega_r^* - \omega_r) \, dt - K_p \omega_r \qquad (25)$$

where K_p and K_i are the speed controller gains.

2.2. Testing method

The system is tested in MATLAB/Simulink with a C-MEX S-function [16]. The simulation block diagram is shown in Figure 3.

Figure 3. Simulation system

The values of the various parameters are supplied to the current controller through the input port shown in the figure. The reference speed is 100 rad/s, with control parameters Kp = 0.5 and Ki = 1. The stator and rotor resistances, the stator and rotor inductances, and the mutual inductance are all varied: in this test, the motor parameter values in the controller are changed as shown in Table 2, so that they differ from the actual motor parameters.

Table 2. Variation of the induction motor parameter values

Parameter  Change (%)  Value     Unit
Rs         -50         1.38      Ω
           -90         0.276
           +50         4.14
           +90         5.244
Rr         -50         1.45      Ω
           -90         0.29
           +50         4.35
           +65         4.785
Ls         -4          0.225504  H
           -5          0.223155
           +5          0.246645
           +10         0.25839
Lr         -4          0.225504  H
           -5          0.223155
           +5          0.246645
           +10         0.25839
Lm         -2          0.223342  H
           -3          0.221063
           -5          0.216505
           -10         0.20511
           +2.5        0.233598
           +3          0.234737

3 Results and Discussion

The simulation result of the system with the correct parameters is shown in Figure 4.
It shows that the actual speed (ω_r) reaches the reference speed (ω_r*) of 100 rad/s. Although the estimated speed differs slightly from the actual speed during the transient, it matches the actual and reference speeds at steady state. This means that the sensorless control system works well. The simulation results with varied parameter values are described in Figures 5-9.

Figure 4. Simulation result with nominal parameter values

Figure 5. Simulation result with variation of the Rs parameter value

In Figure 5, the value of Rs in the controller is varied. The figure shows that when the value of Rs in the controller is reduced by up to 90% (Figures 5.a and 5.c), the actual speed still reaches the reference speed of 100 rad/s, although the transient conditions differ. When Rs is enlarged by 50% (Figure 5.b), the actual speed reaches the reference speed, even though the estimated speed oscillates. However, if Rs is enlarged further, a steady-state error occurs: there is a small difference between the actual and reference speeds (Figure 5.d), and the estimated speed oscillates with increasing amplitude. Thus, to get a good response, the value of Rs may deviate by at most +50%.

Figure 6. Simulation result with variation of the Rr parameter value

The behavior under changes in Rr is almost the same as under changes in Rs, as shown in Figure 6.
The figure shows that when the value of Rr in the controller is reduced by up to 90% (Figures 6.a and 6.c), the actual speed reaches the reference speed of 100 rad/s, although the transient conditions differ. In addition, when the Rr value is reduced, a slight overshoot occurs (Figure 6.c). When Rr is enlarged by 50% (Figure 6.b), the actual speed reaches the reference speed. However, if Rr is enlarged by 65%, both the estimated and actual speeds oscillate (Figure 6.d). Thus, to get a good response, the value of Rr may deviate by at most 50%.

Figure 7. Simulation result with variation of the Ls parameter value

The effect of differences in the resistance values differs from that of the inductance values, as illustrated in Figures 7-9. In all three figures, a good response requires the difference between the inductance value in the controller and the actual value to be very small. The tolerable difference in the values of Ls (Figure 7) and Lr (Figure 8) is between -5% and +5% (Figures 7.a-c and 8.a-c). When the difference grows to 10%, the estimated speed oscillates (Figures 7.d and 8.d), and the time for the actual speed to achieve stability (settling time) becomes longer than before.

Figure 8. Simulation result with variation of the Lr parameter value

Even small differences between the mutual inductance (Lm) in the controller and the actual motor value greatly affect the system response, as illustrated in Figure 9.
To get a good response, the tolerable difference in Lm is smaller than for Ls and Lr: between -3% and +3% (Figures 9.a, 9.c, 9.d, 9.f). As the difference grows, the estimated speed oscillates (Figures 9.b and 9.e), and the settling time of the actual speed becomes longer than before. When the Lm value is enlarged by more than +3%, the system fails. Therefore, the recommended difference in Lm is between -3% and +3%.

Figure 9. Simulation result with variation of the Lm parameter value

4 Conclusion

The differences between the induction motor parameters in the controller and the actual values affect the system response. The differences in the values of Rr and Rs that can be applied are at most 50%. Small differences in the inductance values, however, greatly affect the system response: to get a good response, the differences in Ls and Lr must be between -5% and +5%, while the difference in Lm must be between -3% and +3%.

References

[1] Z. Alpholicy X., S. S. Bhandari, P. P. Dsouza, and D. C. Raina, "Personal assistant robot," Int. J. Appl. Sci. Smart Technol., 3(2), 145-152, 2021.
[2] Y. E. Loho, D. Lestariningsih, and P. R. Angka, "Alarm system and emergency message from wheelchair user emergency condition," Int. J. Appl. Sci. Smart Technol., 3(2), 171-184, 2021.
[3] B. W. Harini, F. Husnayain, A. Subiantoro, and F. Yusivar, "A synchronization loss detection method for PMSM speed sensorless control," J. Teknol., 82(4), 47-54, 2020.
[4] P. Vas, Sensorless Vector and Direct Torque Control. Oxford University Press, 1998.
[5] O. Avalos, E. Cuevas, and J. Gálvez, "Induction motor parameter identification using a gravitational search algorithm," Computers, 5(2), 2016.
[6] A. C. Megherbi, H. Megherbi, K. Benmahamed, A. G. Aissaoui, and A. Tahour, "Parameter identification of induction motors using variable-weighted cost function of genetic algorithms," J. Electr. Eng. Technol., 5(4), 597-605, 2010.
[7] S. Yamamoto and H. Hirahara, "Effect of parameter tuning on driving performance of a universal-sensorless-vector-controlled closed-slot cage induction motor," 2019 22nd Int. Conf. Electr. Mach. Syst. (ICEMS), 2019.
[8] B. W. Harini, "Pengaruh parameter motor pada sistem kendali tanpa sensor putaran" [The effect of motor parameters on a rotational-speed sensorless control system], J. Nas. Tek. Elektro dan Teknol. Inf., 10(3), 236-242, 2021.
[9] A. Glumineau and J. de León Morales, Sensorless AC Electric Motor Control. Springer, 2015.
[10] F. Yusivar and N. Avianto Wicaksono, "Simulasi mesin induksi tanpa sensor kecepatan menggunakan pengendali orientasi vektor" [Speed-sensorless simulation of an induction machine using a vector-oriented controller], J. Nas. Tek. Elektro dan Teknol. Inf., 4(4), 2016.
[11] R. Ridwan, E. Purwanto, H. Oktavianto, M. R. Rusli, and H. Toar, "Desain kontrol kecepatan motor induksi tiga fasa menggunakan fuzzy PID berbasis direct field oriented control" [Speed control design for a three-phase induction motor using fuzzy PID based on direct field-oriented control], J. Integr., 11(2), 146-155, 2019.
[12] J. Agrawal and S. Bodkhe, "Low speed sensorless control of PMSM drive using high frequency signal injection," 12th IEEE Int. Conf. Electron. Energy, Environ. Commun. Comput. Control (E3-C3), INDICON 2015, 4-9, 2016.
[13] Freescale Semiconductor, "3-phase AC induction motor vector control using a 56F8300 device," 2005.
[14] R. Gunawan and F. Yusivar, "Reducing estimation error due to digitizing problem in a speed sensorless control of induction motor," IECON Proc. (Industrial Electronics Conf.), 2005(1), 1677-1682, 2005.
[15] F. Yusivar, H. Haratsu, T. Kihara, S. Wakao, and T. Onuki, "Performance comparison of the controller configurations for the sensorless IM drive using the modified speed adaptive observer," IEE Conf. Publ., 475, 194-200, 2000.
[16] F. Yusivar and S. Wakao, "Minimum requirements of motor vector control modeling and simulation utilizing C MEX S-function in MATLAB/Simulink," Proc. Int. Conf. Power Electron. Drive Syst., 1, 315-321, 2001.

International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 2, pages 241-256, p-ISSN 2655-8564, e-ISSN 2685-9432

Independence Test and Plots in Correspondence Analysis to Explore Tracer Study Data

Endang Sri Kresnawati1, Irmeilyana1,*, Ali Amran1, Danny Matthew Saputra2
1 Department of Mathematics, Faculty of Mathematics and Natural Science, University of Sriwijaya, Indralaya, South Sumatra, Indonesia
2 Department of Informatics Engineering, Faculty of Computer Science, University of Sriwijaya, South Sumatra, Indonesia
*Corresponding author: irmeilyana@unsri.ac.id
(received 17-11-2021; revised 30-12-2021; accepted 31-12-2021)

Abstract

The results of exploring tracer study data can provide information about graduates' careers and the relevance of their work to their field of study and to the competencies obtained before graduation. The question items discussed cover the time and process of looking for a job, the length of time to get the first job, and the relationships between length of study, gender, field of work, total income, alumni's perception of the closeness of the field of study to the work, the suitability of the level of education for the job, and average level of competence.
The aim of this study was to analyze the relationships between these variables in the 2020 tracer study data from graduates of all faculties at Sriwijaya University. The respondents studied were 2,669 people. The methods used are descriptive statistics, biplot analysis, and independence tests and plots from simple correspondence analysis. Respondents' perceptions of the suitability of the level of education for their employment are related to gender and to respondents' perceptions of the closeness of the field of study to the field of work; the latter perception is in turn related to the field of work. The average length of study, the average number of job applications, the number of companies or agencies that responded to applications, and the number of interview invitations for female respondents were lower than for male respondents.

Keywords: alumni perception, exploration, field of work, field of study, tracer study data

1 Introduction

Alumni data resulting from tracer studies is useful for obtaining information that can be used for higher education development and for evaluating the relevance of the hard skills, soft skills, and internal/external factors acquired by alumni as students to their work [1]. The Career Development Center (CDC) is a character and career development center at Unsri, formed in 2013 in response to the low achievement on the graduate-tracking items of the AIPT accreditation forms. The CDC has tracked alumni from the 10 faculties at Sriwijaya University, from the alumni of 2013 to 2020. The tracer study reports can be seen in [2], [3], [4], [5], [6], [7].
The CDC of Sriwijaya University (Unsri) conducted the tracer study to examine alumni's early careers and to obtain alumni feedback for improving the learning system at Unsri and for evaluating and developing a curriculum that meets stakeholder expectations and market needs. Apart from tracer studies, the CDC also provides other services, including the Unsri career expo, soft skills training, online assessment, career training, and career counseling [6]. References on career centers and their services, and on solutions to the employment problems graduates face, such as aligning the world of education with the world of work, can be found in [8], [9], [10].

Interpretation of the questionnaire results in the form of descriptive statistics, both as numbers (percentages) and as graphics, is very helpful in providing information for further analysis, and the results of the analysis are very useful for the successful implementation of the tracer study. Tracer study data can be big data consisting of many objects and many variables, so to extract as much information as possible, other analysis techniques are needed, including multivariate analysis.

In [11], much information was obtained by comparing FKIP respondents (alumni) with FMIPA respondents (alumni) in each of 4 departments/study programs, based on data from the 2013 to 2016 tracer studies.
The information obtained includes: the relationships between GPA, thesis duration, and length of study; profiles of alumni who did and did not receive scholarships during college, reviewed in terms of GPA, thesis duration, and length of study; each alumnus's field of work; the relationship between GPA and the length of time to get the first job, and between GPA and the suitability of the level of education for the job; the relationship between the closeness of the field of study and the suitability of the level of education for the job; alumni perceptions of the contribution of higher education (Unsri) to all competency items possessed by alumni; and the relationships between competency groups and GPA, level of education, and length of time to get a job.

The relationships between the factors studied in [12] were obtained after examining descriptive statistics of the tracer study data. Descriptions of the alumni of each faculty and comparisons of alumni between faculties at Unsri provide much input, not only on the alumni competencies needed by the world of work but also on the steps all academicians can take together to prepare higher-quality graduates in accordance with the vision and mission of the university, which must of course be supported by the visions and missions of the faculties and departments/study programs.

In [13], the relationship between GPA and the suitability of the education level for the field of work was analyzed for Sriwijaya University alumni from 5 faculties, namely FISIP, FMIPA, FE, FH, and FT, based on 2019 tracer study data. The majority of respondents' perceptions of the level of education, and of the closeness of the field of study required in their work, are not related to GPA; only among FT respondents is there a relationship between GPA and the closeness of the field of study to the job.
Based on [14], in the data of 5 faculties, both graduate and respondent data, women's GPA is higher than men's; on the other hand, women's length of study and income are lower than men's. The average GPA of FH alumni is the highest of all faculties, the average length of study of FISIP alumni is the highest, and the average income and total income of FT respondents are the highest. Meanwhile, in [15], boxplot analysis showed that GPA did not affect the income or field of work of the 2010 ITB alumni.

In [16], 4 majors (study programs) at the Faculty of Mathematics and Natural Sciences (FMIPA) and the Faculty of Teacher Training and Education (FKIP) were compared in terms of the relationships between gender, the average alumni perception of the competencies possessed and needed in the field of work, length of study, length of time to get a first job, income, field of work, alumni's perception of the suitability of the education level for the field of work, and respondents' perceptions of the closeness of the field of study to the field of work. That research is based on 2020 tracer study data on FKIP and FMIPA respondents, 216 and 239 respondents respectively.

In [17], the relationship between alumni perceptions of the competencies mastered and the competencies needed by the world of work was examined for Unsri's 2018 graduates. There are 8 out of 29 competencies that should be further improved; in addition, the types of competencies to be further enhanced differ between female and male graduates.
This study aims to analyze the relationships between several variables from the tracer study questionnaire items simultaneously, and to explore the data further, using as objects all 2020 tracer study respondents from the 10 faculties at Unsri. Quantitative variables were analyzed descriptively and exploratively using biplot analysis. Qualitative variables on nominal and ordinal scales were analyzed using independence tests and plots from correspondence analysis. The independence test used is the chi-square (χ²) test; the output of the correspondence analysis includes symmetric and asymmetric plots. Because this study uses data from all respondents from all faculties, the results of the analysis can describe in general the characteristics of Unsri's 2018 graduates in their careers and the relevance of the competencies obtained from college to their work.

2 Research Methodology

This research is a case study using secondary data from the questionnaires of the 2020 tracer study conducted by CDC Unsri. The respondents in the 2020 tracer study are alumni who graduated in 2018. The data covers the tracer study results of the 10 faculties at Unsri. This study only uses the answers to several questionnaire questions needed for the descriptive and exploratory analyses, namely: gender, length of study, length of time looking for a job, number of job applications, number of responses to job applications, number of interview calls, length of time to get the first job, field of work, main income, total income, respondent's perception of the most appropriate level of education for the alumni's work, closeness of the field of study to the alumni's job, and average perception of competence level. Alumni (graduates) who filled out the tracer study questionnaire are referred to as respondents.
The analytical techniques used are descriptive statistics, biplot analysis (including correlations between variables), the chi-square (χ²) test, and simple correspondence analysis. The steps taken on the combined data of all faculties are:
1. Select the required questionnaire questions as variables.
2. Compile a data matrix from the answers to the questions in step 1, with all respondents from the 10 faculties as objects. The variables include: length of time looking for a job, both before and after graduation (F3); length of time to get the first job (F5); number of job applications (F6); number of companies responding (F7); number of job interview calls (F7a); field of work (F11); income (F13); alumni's perception of the closeness of the field of study to the alumni's work (F14); the suitability of the most appropriate level of education for the alumni's work (F15); and the average alumni perception of the level of competencies mastered and needed in the field of work (F17).
3. Develop a new data matrix by removing respondents who did not fill in the questions about income.
4. Add the gender and length of study variables.
5. Compute descriptive statistics.
6. Perform a biplot analysis as a graphical representation of the data matrix of quantitative variables.
7. Perform the chi-square test:
   a. Arrange the column and row categories in a contingency table.
   b. Calculate the cross-tabulated frequencies of the column and row categories.
   c. Perform the chi-square test on the contingency table.
   d. If any cell frequency of the contingency table is less than 5, merge categories; otherwise, proceed to step 8.
8. Perform a correspondence analysis on the relationship between two related variables based on the results of step 7.
9. Interpret the results.
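Steps 7 and 8 above can be sketched with SciPy; the contingency table here is hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: gender (rows) against perceived
# suitability of the education level for the job (columns).
table = np.array([
    [120, 300, 90],
    [ 80, 250, 60],
])

# Step 7c: chi-square test of independence on the contingency table.
chi2, p, dof, expected = chi2_contingency(table)

# Step 7d: the chi-square approximation is reliable only when every
# expected cell frequency is at least 5; otherwise merge categories.
approximation_ok = bool((expected >= 5).all())
```

A small p-value rejects independence, which is the condition for proceeding to the correspondence analysis of step 8.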
Data processing was done with the help of Minitab 19 software.

3 Results and Discussion

The 2020 tracer study data came from 3,850 respondents, consisting of 2018 graduates of the 10 faculties at Unsri. The data were compiled into a new data matrix of 2,669 respondents with the variables listed in step 2. This new data matrix was formed by excluding respondents who did not fill in their income (either because they had not found a job, were not working, or were graduates continuing their studies). Thus the 2,669 remaining respondents are all working. The length-of-study and gender variables were then added to the data matrix. Table 1 displays descriptive statistics of the answers to several questionnaire questions. On average, respondents started looking for work 1.63 months after graduation and got their first job 5.94 months after graduation. Only 46 respondents reported looking for work but did not fill in the question about the length of time to get their first job.

Table 1. Descriptive statistics of the variables on the process of respondents looking for and getting a job

variable   n(e)   %     mean    stdev    median
f302(a)    561    21.4  6.992   13.671   2
f303(b)    2061   78.6  1.625   2.8753   1
bgrad(c)   290    11.3  11.14   17.52    3
agrad(d)   2286   88.7  5.936   4.887    5

Note:
(a) length of time looking for a job before graduating (in months)
(b) length of time looking for a job after graduation (in months)
(c) length of time to get the first job before graduating (in months)
(d) length of time to get the first job after graduation (in months)
(e) number of respondents

A comparison of the length of study, the status of the job search process, income, and the average alumni perception of competency level can be seen in Table 2.
Male respondents had a higher number of job applications (f6), companies responding (f7), and interview calls (f7a) than female respondents. Of the job applications made, only about a third received a response from the companies (users). Of the users (companies/agencies) that responded, only about half called the respondent for a job interview.

Table 2. Descriptive statistics of length of study, status of the job search process, income, and perception of competence (gender: 0 = female, 1 = male)

variable                 gender  n     mean     stdev    median
length of study (years)  all     2669  4.58     1.08     4
                         0       1510  4.42     0.95     4
                         1       1159  4.79     1.19     5
f6                       all     2601  24.19    62.34    10
                         0       1484  19.68    37.94    10
                         1       1117  30.20    84.12    10
f7                       all     2596  7.86     11.92    5
                         0       1484  6.95     8.58     4
                         1       1112  9.06     15.20    5
f7a                      all     2586  4.67     6.00     3
                         0       1480  4.20     5.00     3
                         1       1106  5.30     7.09     3
income                   all     2669  4007294  4399354  3100000
                         0       1510  3269989  3244314  3000000
                         1       1159  4967890  5407720  4000000
total income             all     2669  4546057  4781143  3700000
                         0       1510  3703070  3514537  3000000
                         1       1159  5644341  5868338  4500000
s-kk                     all     2669  3.84     0.48     3.83
                         0       1510  3.83     0.47     3.83
                         1       1159  3.84     0.49     3.83
s-pt                     all     2669  3.79     0.59     3.79
                         0       1510  3.81     0.58     3.83
                         1       1159  3.76     0.59     3.79

Based on Table 2, there are 1,510 (57%) female respondents and 1,159 (43%) male respondents. The length of study of male respondents (average 4.79 years) is longer than that of female respondents (average 4.42 years). Male respondents also have higher average income and total income than female respondents. The average respondent perceptions of the level of competencies mastered (denoted s-kk) and of the competencies required by the world of work (denoted s-pt) tend to be about the same. The correlations between the variables of Table 2 can be seen in Table 3 and in the biplot of Figure 1.
Table 3. Correlations between length of study, status of the job search process, income, and perception of competence

              length of study  f6      f7      f7a     income  total income  s-kk
f6            0.042
f7            -0.016           0.722
f7a           -0.024           0.383   0.721
income        0.007            0.058   0.080   0.088
total income  0.007            0.051   0.077   0.089   0.945
s-kk          -0.023           0.019   0.018   0.039   0.046   0.042
s-pt          -0.023           -0.018  -0.024  -0.003  0.034   0.027         0.773

Based on Table 3, income is (very highly) correlated only with total income. The number of job applications, the number of users who responded to an application, and the number of interview calls are highly correlated with one another. Likewise, the average respondent's perception of the level of competencies mastered is highly correlated with that of the competencies needed by the world of work. The same pattern can be seen in Figure 1a.

Figure 1. Biplot of length of study, status of the job search process, income, and perception of competence (1a: correlations between variables; 1b: biplot)

Based on Figure 1a, the biplot represents 52.3% of the variation in the data. The first component is dominated by the variables describing the respondent's job search process (f6, f7, and f7a), while the second component is dominated by the income and total income variables. The respondents' positions tend to spread in the directions of the variable vectors: only a small proportion of respondents have a high income, have their job applications responded to, and get the opportunity to be interviewed. Next, the relationships between several variables of the data matrix are explored using the independence test, i.e. the chi-square test. If the test result states that two variables are related, the process is continued with a simple correspondence analysis.
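The biplot of Figure 1 is, in essence, a principal component decomposition of the standardized data matrix. The following is a minimal sketch of how its components can be computed; the data here is randomly generated as a stand-in for the 2,669-respondent matrix, and this is not the paper's Minitab procedure.

```python
# A sketch of a biplot's ingredients: PCA via SVD of the standardized data
# matrix. The data is random, standing in for (respondents x variables).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # hypothetical data matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each variable

# SVD of the standardized matrix yields the principal components
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)           # variation represented per axis

scores = U[:, :2] * s[:2]   # respondent coordinates (points on the biplot)
loadings = Vt[:2].T         # variable coordinates (vectors on the biplot)

# The 2-D "data variation represented", reported as 52.3% in figure 1a
print(round(float(explained[:2].sum()), 3))
```

Variables whose loading vectors point in similar directions are positively correlated, which is how Figure 1a is read.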
The cells of the contingency table hold the frequencies of respondents in the cross-classification of the row and column categories. Figure 2 is the partial output of the chi-square test of the relationship between length of study and gender.

Chi-square test for association: length of study (in years); gender
Rows: length of study (in years)   Columns: gender

        0     1     All
2       41    22    63
3       33    34    67
4       931   522   1453
5       349   350   699
6       67    78    145
7       86    122   208
8       3     31    34
All     1510  1159  2669

Cell contents: count

Chi-square test   Chi-square  DF  P-value
Pearson           106.684     6   0.000
Likelihood ratio  110.179     6   0.000

Figure 2. Output of the chi-square test of the relationship between length of study and gender

Based on Figure 2, the majority of respondents graduated in 4 years and are female (931 people, or around 35%). The computed value χ² = 106.684 exceeds the table value χ²(0.05; 6) = 12.592, indicating a relationship between length of study and gender. The same conclusion follows from the p-value < 0.05.

Chi-square test for association: level of education; gender
Rows: level of education   Columns: gender

        0     1     All
1       28    35    63
2       1475  1113  2588
4       7     11    18
All     1510  1159  2669

Cell contents: count

Chi-square test   Chi-square  DF  P-value
Pearson           6.250       2   0.044
Likelihood ratio  6.183       2   0.045

Figure 3. Output of the chi-square test of the relationship between respondents' perception of education level and gender

Based on Figure 3, the majority of female respondents have the perception that the level of education most suitable for their job is "the same level" (1,475 people, or 55%).
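The Pearson chi-square statistic of Figure 2 can be reproduced from the printed counts. A sketch using scipy follows; note that scipy applies no continuity correction for tables larger than 2×2, matching the Pearson statistic reported.

```python
# Reproducing the figure 2 chi-square test from its printed contingency table
# (length of study in years x gender, with the paper's gender codes 0 and 1).
import numpy as np
from scipy.stats import chi2_contingency
from scipy.stats import chi2 as chi2_dist

counts = np.array([
    [41,  22],   # 2 years
    [33,  34],   # 3 years
    [931, 522],  # 4 years
    [349, 350],  # 5 years
    [67,  78],   # 6 years
    [86,  122],  # 7 years
    [3,   31],   # 8 years
])

stat, p, dof, expected = chi2_contingency(counts)
critical = chi2_dist.ppf(0.95, dof)   # the "chi-square table" value at alpha = 0.05

print(round(stat, 2))      # about 106.68, as reported in figure 2
print(dof)                 # 6
print(round(critical, 3))  # 12.592
print(stat > critical)     # True: reject H0, the two variables are related
```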
The computed value χ² = 6.25 exceeds the table value χ²(0.05; 2) = 5.99, indicating a relationship between the perception of the suitability of the level of education for the job and gender. The same conclusion follows from the p-value < 0.05. The same procedure was carried out to analyze the relationships between the categories of the other pairs of variables, giving the recapitulation in Table 4.

Table 4. Recapitulation of the chi-square tests for the correspondence analysis

no | row variable × column variable | majority category (%) | χ² count | χ² table (α = 0.05) | conclusion
1 | length of study × gender | 4 years, female respondents (35) | 106.68 | 12.59 (df 6) | reject H0: there is a relationship
2 | f15(a) × gender | the same level, female respondents (55) | 6.25 | 5.99 (df 2) | reject H0: there is a relationship
3 | f14(b) × gender | very close, female respondents (21.5) | 2.484 | 9.49 (df 4) | accept H0: no relationship
4 | f14 × f15 | very close and the same level (36) | 39.809 | 15.51 (df 8) | reject H0: there is a relationship
5 | f11(c) × f15 | the same level, private companies (54) | 6.256 | 15.51 (df 8) | accept H0: no relationship *)
6 | f14 × f11 | very close, private companies (21) and government agencies (16) | 212.95 | 26.3 (df 16) | reject H0: there is a relationship

Note:
(a) the most appropriate level of education for the respondent's job
(b) the closeness of the field of study to the job
(c) field of work
*) the test on these two variables' categories is invalid

Based on Table 4, only two of the chi-square tests have χ² count < χ² table, i.e. no relationship: between gender and the respondent's perception of the closeness of the field of study to the job, and between the field of work and the respondent's perception of the suitability of the level of education for the job.
Next, the relationships between row and column variables with more than two categories are described through the output of correspondence analysis. This output includes the chi-square distances and the total inertia of the two axes of the plot. The plots of this simple correspondence analysis have total inertias of 100% (Figure 4a) and 95.7% (Figure 4b), so they are highly representative of the diversity in the data.

Figure 4. Plots of the relationships between two variables from the correspondence analysis (4a: respondents' perceptions of the closeness of the field of study and of the suitability of the education level to the job; 4b: the field of work and respondents' perceptions of the closeness between the field of study and the job)

Based on Figure 4a, respondents who perceive their field of study as "not closely" related to their work tend to perceive that their work does "not need higher education". Meanwhile, respondents who perceive their field of study as "quite closely" to "very closely" related to their work tend to perceive that their work is at "the same level" as their education. Based on Figure 4b, respondents who work in government agencies (including BUMN) and in the private sector perceive the field of study as "very closely" related to the work, while respondents who work as entrepreneurs perceive the field of study as not closely related to the work.
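Simple correspondence analysis of a contingency table reduces to a singular value decomposition of the matrix of standardized residuals, and the resulting principal inertias sum to χ²/n, which is why the plots in Figure 4 can be labeled with the share of total inertia. A minimal sketch on a small hypothetical table (not the paper's data):

```python
# Simple correspondence analysis of a contingency table N via SVD of the
# standardized residuals. The table below is hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

N = np.array([[20, 10,  5],
              [ 8, 25,  7],
              [ 4,  6, 15]], dtype=float)

n = N.sum()
P = N / n                # correspondence matrix
r = P.sum(axis=1)        # row masses
c = P.sum(axis=0)        # column masses

# standardized residuals S = D_r^{-1/2} (P - r c^T) D_c^{-1/2}
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S)

inertia = sv**2                  # principal inertia per axis
total_inertia = inertia.sum()    # equals chi-square statistic divided by n

# principal coordinates of the row categories on the first two axes,
# i.e. the row points of a symmetric plot like figure 4
rows = (U[:, :2] * sv[:2]) / np.sqrt(r)[:, None]

chi2, _, _, _ = chi2_contingency(N, correction=False)
print(np.isclose(total_inertia, chi2 / n))   # True
```

The share of inertia carried by the first two axes plays the same role here as the 100% and 95.7% figures quoted for Figures 4a and 4b.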
4 Conclusion

Based on the results and discussion, it is concluded that the majority of respondents looked for and got their first job after graduation. The number of job applications, the number of users who responded to an application, and the number of interview calls are highly correlated with one another; likewise, the average respondent's perception of the level of competencies mastered is highly correlated with that of the competencies needed by the world of work. Male respondents have a longer length of study and higher average income and total income than female respondents. The average respondent perceptions of the competencies mastered and of the competencies needed by the world of work tend to be the same. Based on the independence tests, gender is related to the length of study and to the respondents' perception of the suitability of the level of education for the job, and the respondents' perceptions of the closeness of the field of study and of the suitability of the level of education are related to the field of work. The correspondence analysis shows that respondents who perceive their field of study as not closely related to their work tend to perceive that their work "does not need higher education" and tend to work as entrepreneurs, while respondents who work in government agencies (including BUMN) and in the private sector perceive the field of study as "very closely" related to the work.

This study describes the general characteristics of all respondents from the 10 faculties at Unsri based on 10 question items of the tracer study questionnaire. Further research could compare these characteristics across the individual faculties at Unsri.
Acknowledgements

The authors wish to acknowledge the assistance and encouragement of our discussion group, with special thanks to the staff of the CDC (Career Development Center) of Universitas Sriwijaya, who provided the tracer study data.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 2, pages 225–240, p-ISSN 2655-8564, e-ISSN 2685-9432

Subgroup Graphs of Finite Groups

Ojonugwa Ejima (1,*), Abor Isa Garba (1), Kazeem Olalekan Aremu (1)
(1) Department of Mathematics, Usmanu Danfodiyo University, Sokoto, Nigeria
(*) Corresponding author: unusoj1@yahoo.com
(Received 08-10-2021; revised 29-12-2021; accepted 29-12-2021)

Abstract

Let G be a finite group with the set of subgroups of G denoted by L(G); then the subgroup graph of G, denoted by Γ(G), is the graph whose vertex set is L(G), in which two vertices H, K are adjacent if either H is a subgroup of K or K is a subgroup of H. In this paper, we introduce the subgroup graph associated with G. We investigate some algebraic properties and combinatorial structures of the subgroup graph and obtain that the subgroup graph of G is never bipartite. Further, we show isomorphism and homomorphism of the subgroup graphs of finite groups.

Keywords: subgroup, graph, finite group

1 Introduction

One of the mathematical tools for studying the symmetries of an object is group theory; hence, several structures in the field of algebra are depicted through groups. This mathematical concept has evolved rapidly since its discovery in the sixteenth century. According to [1], the rebirth of the axiomatic method and the view of mathematics as a human activity in the nineteenth century formed the major development that changed the course of the evolution of group theory as a mathematical concept.
[1] further noted that this evolution caused the previous classical algebra of polynomial equations to transition to the modern algebra of axiomatic systems of the nineteenth century. Meanwhile, the concept has been applied in the fields of physics, chemistry, and biology (see [2], [3], [4], [5] for details). In the same vein, in the last two decades many studies have related graphs to group theory, providing an easier way to visualize the concept of a group; this relation brings together two important branches of mathematics and has opened up a new wave of research with a better understanding of both fields. Many years after Euler's work on the bridges of the city of Königsberg, Cayley [6] used the generators of a finite group to define a graphical structure called the Cayley graph of a finite group; he further showed that every group of order n can be represented by a strongly connected digraph of n vertices [7]. In the last few decades his notion of digraph has been extended to different and modified graphs of algebraic structures, and algebraic studies through the properties of these modified graphs have become topics of interest to many around the globe (see [8], [9], [10], [11], [12], [13], [14], [15]). This study of the subgroup graph of finite groups, like [8]–[15], focuses on finite groups G; however, the vertex set chosen here is the set of subgroups of G. In the literature, the vertex set of a graph of a finite group is usually the set of elements of G; the deviation from this norm is the motivation for this study.

1.1. Preliminaries

We state some known and useful results that will be needed in the proofs of our main results and for the understanding of this paper. For the definitions of the basic terms and the results given in this section, see [16], [17], [18], [19], [20], [21], [22], [23].
A graph Γ = (V, E) is a combinatorial structure formed by a finite non-empty set V of vertices, viewed as points, and a set E of edges, viewed as lines joining the points. The cardinality of V is called the order of Γ, while the cardinality of E is called the size of Γ. The degree of a vertex v in a graph, denoted deg(v), is the number of edges incident to v. A graph is said to have parallel edges if more than one edge joins the same pair of distinct vertices. A loop, on the other hand, is an edge that joins a vertex to itself, while a walk of length k in a graph consists of an alternating sequence of vertices and edges, consecutive elements of which are incident, that begins and ends with a vertex.

Definition 1.1. [20] A complete graph is a simple undirected graph that has at least one vertex and in which every pair of distinct vertices is joined by a unique edge. A connected graph, on the other hand, is a graph in which there is a path between every pair of vertices.

Remark 1.2. Note that every complete graph is necessarily connected, but connected graphs are not necessarily complete.

Definition 1.3. [20] A walk in a connected graph that visits every vertex of the graph exactly once without repeating edges is called a Hamiltonian path. If this walk starts and ends at the same vertex, it is called a Hamiltonian circuit or cycle.

Remark 1.4. A graph that contains a Hamiltonian cycle is said to be Hamiltonian.

Theorem 1.5. (Sylow's first theorem) [21] Let G be a finite group of order p^a m, where p is a prime, a and m are positive integers, and p does not divide m. Then G has a subgroup of order p^i for all i satisfying 1 ≤ i ≤ a.

Definition 1.6. [20] In graph theory, a regular graph is a graph in which each vertex has the same number of neighbors, i.e.
every vertex has the same degree or valency; a regular graph in which every vertex has degree k is called a k-regular graph.

Definition 1.7. [20] The distance d(u, v) between two vertices u and v is the length of the shortest path between u and v. The eccentricity of a vertex v in a graph Γ is defined as ecc(v) = max{d(v, u) : u ∈ V}. The minimum and maximum eccentricities in a graph are called the radius, rad(Γ), and the diameter, diam(Γ), of the graph respectively.

Lemma 1.8. [20] Let G and G′ be any two finite groups. If G ≅ G′, then |G| = |G′| and the subgroup structures of G and G′ correspond, but the converse is not true.

Definition 1.9. [21] A group (G, *) consists of a set G with a binary operation * on G satisfying the following four conditions:
1. Closure: for all a, b ∈ G, we have a * b ∈ G.
2. Associativity: for all a, b, c ∈ G, we have (a * b) * c = a * (b * c).
3. Identity: there is an element e ∈ G satisfying e * a = a * e = a for all a ∈ G.
4. Inverse: for all a ∈ G, there is an element a⁻¹ ∈ G satisfying a * a⁻¹ = a⁻¹ * a = e (where e is as in the identity law).

Definition 1.10. [20] A finite group is a group G containing a finite number of elements. The order of a finite group G is the number of elements in G.

Definition 1.11. [21] A subset H of a group G is called a subgroup if it forms a group in its own right with respect to the same operation on G.

Definition 1.12. [21] Let G and H be groups. A homomorphism from G to H is a map φ : G → H which preserves the group operation.

Definition 1.13. [21] A subgroup N of G is said to be a normal subgroup if it is the kernel of a homomorphism. Equivalently, N is a normal subgroup if its left and right cosets coincide: gN = Ng for all g ∈ G. We write "N is a normal subgroup of G" as N ⊴ G. If N is a normal subgroup of G, we denote the set of (left or right) cosets by G/N, and we define an operation on G/N by the rule (aN)(bN) = abN for all a, b ∈ G.

Definition 1.14. [24] Let x and y be elements of a group G; the commutator of x and y is the element of G defined by [x, y] = x⁻¹y⁻¹xy. The collection of the commutators of arbitrary x, y in G generates the commutator subgroup of G.

Lemma 1.15. [16] Suppose x, y ∈ G and e is a positive integer. Then:
1. …
2. …
3. …
4. …
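The subgroup criterion of definition 1.11 can be checked mechanically for small groups. The following is a minimal sketch for Z_n under addition modulo n; the helper name is ours, not from the paper.

```python
# Checking definition 1.11 directly: a subset H of Z_n is a subgroup iff it
# contains the identity and is closed under the operation and under inverses.
def is_subgroup(H, n):
    H = set(H)
    if 0 not in H:                        # identity law
        return False
    for a in H:
        if (-a) % n not in H:             # inverse law
            return False
        for b in H:
            if (a + b) % n not in H:      # closure law
                return False
    return True                            # associativity is inherited from Z_n

print(is_subgroup({0, 3}, 6))        # True:  the order-2 subgroup of Z_6
print(is_subgroup({0, 2, 4}, 6))     # True:  the order-3 subgroup of Z_6
print(is_subgroup({0, 1, 3}, 6))     # False: not closed, since 1 + 1 = 2 is missing
```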
Lemma 1.16. [25] Let G be a group with H a subgroup of G and K a subgroup of H; set …. Then the following hold:
1. …, and if … is abelian, then ….
2. If … is abelian and … is the minimal number of elements needed to generate … and …, then ….
3. If … is abelian and … is the minimal number of elements needed to generate …, there exists … such that ….
4. If … is cyclic, then … is cyclic.

Lemma 1.17. [25] Suppose G = ⟨…⟩ is a finite group with G′ abelian and of order …. If G′ is the commutator subgroup of G, then ….

Theorem 1.18. [25] Let G be a group with G′ abelian and ….
1. There exists … with …; in particular, if …, then … can be taken to be any element of ….
2. Let …. If either …, then ….
3. … is equal to the commutator subgroup of ….

Lemma 1.19. [25] Let H be a subgroup of G, x an element of G, and …. If … is abelian, then ….

Theorem 1.20. [26] Let G be a group, let H and K be two subgroups of G, and define HK = {hk : h ∈ H, k ∈ K}. Then:
1. If both H and K are normal in G, then HK is also a normal subgroup of G.
2. If H alone is normal in G, then HK is a subgroup of G.
3. If H is normal in G, then H ∩ K is a normal subgroup of K.
4. If both H and K are normal in G, then H ∩ K is a normal subgroup of G.

Theorem 1.21. (Sylow's theorems) Let G be a group of order p^a m, where p is a prime, a ≥ 1, and p does not divide m. Then:
1. Sylow p-subgroups of G exist.
2. All Sylow p-subgroups are conjugate in G.
3. Any p-subgroup of G is contained in a Sylow p-subgroup.
4. The number of Sylow p-subgroups of G is congruent to 1 modulo p.

Lemma 1.22. (Lagrange's theorem) [22] Let H be a subgroup of a finite group G. Then the order of H divides the order of G.

2 Research Methodology

This article is not variable-based research; rather, well-known algebraic definitions and results were used to investigate the algebraic and combinatorial properties of the subgroup graph of finite groups.

3 Results and Discussion

The subgroup graph of finite groups is introduced in this section.
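The construction introduced below (definition 3.1 and example 3.3) can be sketched computationally for cyclic groups: the subgroups of Z_n are exactly the cyclic subgroups generated by the divisors of n, and two subgroups are joined when one contains the other. A minimal sketch in plain Python follows; the helper name is ours, not from the paper.

```python
# Building the subgroup graph of Z_n. Each subgroup of Z_n is
# <d> = {0, d, 2d, ...} for a divisor d of n, so vertices map to divisors.
def subgroup_graph(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    subgroups = {d: frozenset(range(0, n, d)) for d in divisors}
    vertices = list(subgroups.values())
    edges = set()
    for H in vertices:
        for K in vertices:
            if H != K and H <= K:       # adjacency: H is a subgroup of K
                edges.add(frozenset({H, K}))
    return vertices, edges

V, E = subgroup_graph(6)
print(len(V), len(E))   # 4 vertices ({0}, {0,3}, {0,2,4}, Z_6) and 5 edges:
                        # the two proper nontrivial subgroups are not adjacent
```

For a prime n the same function returns two vertices joined by a single edge, matching what theorem 3.6 below proves in general.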
We begin with the definition and notion of the subgroup graph of a finite group.

Definition 3.1. Let G be a finite group and let L(G) be the set of subgroups of G. Then the subgroup graph of G is the graph Γ(G) with vertex set L(G) and edge set E = {{H, K} : H, K ∈ L(G), H ≠ K, and H ≤ K or K ≤ H}.

Remark 3.2. Let G be a finite group of order n. Some clear consequences of the definition of the subgroup graph of a finite group are:
1. The subgroup graph Γ(G) is a simple graph; thus there are neither loops nor multiple edges.
2. The trivial subgroups {e} and G are adjacent to every other vertex of Γ(G).
3. Since the trivial subgroups of G are adjacent to every other vertex of Γ(G), the graph Γ(G) is connected.
4. Since the trivial subgroups are adjacent to every other vertex, Γ(G) has radius 1 and diameter at most 2.

Below we give an example of a subgroup graph.

Example 3.3. Let G = Z_6, the group of integers modulo 6 under addition. Figure 1 shows the undirected subgroup graph of Z_6.

Figure 1. Subgroup graph of Z_6.

Theorem 3.4. [20] A simple graph is bipartite if and only if it does not have any odd cycle.

Remark 3.5. Let G be a finite group; then the subgroup graph of G is never bipartite.

Theorem 3.6. Let G be a finite group of prime order and let L(G) be the set of subgroups of G. Then the subgroup graph of G is a straight line with only two vertices.

Proof: Let G be a finite group of prime order and let L(G) be the set of subgroups of G. By lemma 1.22, the order of every subgroup divides the order of G; but the order of G is a prime, which is divisible only by itself or 1. Thus the only subgroups of the group are {e} and G, and they are adjacent.

Remark 3.7. Let G be a finite group and let Γ(G) be the subgroup graph of G. Then the vertex set and the edge set of Γ(G) are non-empty; therefore the subgroup graph of a finite group can never be empty.

Theorem 3.8. Let G be a group, let H and K be two subgroups of G, and let Γ(G) be the subgroup graph of G. Define HK = {hk : h ∈ H, k ∈ K}. Then:
1. If both H and K are normal in G, then HK is also a vertex of Γ(G).
2.
If H alone is normal in G, then HK is adjacent to a vertex of Γ(G).
3. If H is normal in G, then H ∩ K is also a vertex of Γ(G).
4. If both H and K are normal in G, then H ∩ K is also a vertex of Γ(G).

Proof: The results follow from theorem 1.20, since each set shown there to be a subgroup of G is, by definition 3.1, a vertex of Γ(G).

Theorem 3.9. Let G be a finite group of non-prime order n; then the subgroup graph of G is never a star graph.

Proof: Suppose on the contrary that the subgroup graph of a finite group G of non-prime order is a star graph; then all the other subgroups are adjacent to only one arbitrary subgroup H ∈ L(G). But by remark 3.2(2), every group has two trivial subgroups which are adjacent to all the other vertices. So the graph cannot be a star graph, since more than one vertex is adjacent to all the vertices of Γ(G).

Theorem 3.10. Let G and G′ be two isomorphic finite groups. Then the subgroup graph of G is isomorphic to the subgroup graph of G′.

Proof: Suppose G and G′ are two isomorphic finite groups; then from lemma 1.8, their subgroup structures correspond. Thus Γ(G) ≅ Γ(G′).

Theorem 3.11. Let G be a group of order p^a m, where p is a prime and p does not divide m. Suppose Γ(G) is the subgroup graph of G, and V and E are the vertex and edge sets of Γ(G) respectively. Then:
1. There must exist Sylow p-subgroups of G which are conjugate in G and are adjacent to each other on the subgroup graph of G.
2. There is a vertex on the subgroup graph of G that is adjacent to a vertex that is a Sylow p-subgroup of G.

Proof:
1. To show that there must exist Sylow p-subgroups of G which are conjugate in G and are adjacent to each other on the subgroup graph of G, it suffices to show the Sylow p-subgroups to be conjugate and subsequently to establish an edge between them. From theorem 1.21 it is established that the group G has Sylow p-subgroups. So let P be a Sylow p-subgroup of G, and let C be the set of all distinct conjugates of P. Suppose k is the order of C; we need to establish that p cannot divide k.
Since each of the subgroups in C is a conjugate of P, every element of C lies in the orbit of P; so, using the formula for the orbit size, k = [G : N(P)], where N(P) is the normalizer of P in G. By Lagrange's theorem, [G : N(P)] divides |G|; clearly N(P) is a subgroup of G containing P, so its order contains as a factor the maximum power of p that |G| admits. Thus [G : N(P)] contains no factor of p, and so p does not divide k.
2. Suppose Q is any p-subgroup of G; it will suffice to show that Q is contained in some Sylow p-subgroup of G. Let Q act on C = {P_1, P_2, …, P_k} by conjugation. Clearly, the orbits of this action partition C, so k is the sum of the orbit sizes. To compute the orbit of any P_i, its size is [Q : stab(P_i)], since stab(P_i) is the stabilizer of P_i under the action of Q. Hence the size of each orbit divides |Q|, which is a power of p. But p does not divide k, so p cannot divide all the orbit sizes, for otherwise p would divide their sum, which is k. Hence some orbit {P_j} has size one, which means xP_j x⁻¹ = P_j for every x ∈ Q; and since Q is a p-subgroup, QP_j is a p-subgroup containing P_j, which forces QP_j = P_j. This implies that every element of Q is also in P_j, so Q ≤ P_j; therefore the p-subgroup Q is a subgroup of the arbitrary Sylow p-subgroup P_j of G, and Q and P_j are adjacent on the subgroup graph of G.

Theorem 3.12. Let G and H be two finite groups and let φ : G → H be a group homomorphism. Suppose there is an N, a normal subgroup of G, and an M, a normal subgroup of H, such that N is adjacent to G on the subgroup graph of G and M is adjacent to H on the subgroup graph of H. Then:
1. φ(N) is adjacent to H on the subgroup graph of H.
2. φ⁻¹(M) is adjacent to G on the subgroup graph of G.

Proof: Suppose G and H are two finite groups and φ : G → H is a group homomorphism. If there is an N, a normal subgroup of G, and an M, a normal subgroup of H, such that N is adjacent to G on Γ(G) and M is adjacent to H on Γ(H), then to show that φ(N) is adjacent to H on Γ(H) and that φ⁻¹(M) is adjacent to G on Γ(G), it will suffice to show that φ(N) is a normal subgroup of H and that φ⁻¹(M) is a normal subgroup of G respectively.
1. Let h = φ(g) be an element of the image of φ; since φ is a group homomorphism and N is normal in G, hφ(N)h⁻¹ = φ(gNg⁻¹) = φ(N). Thus φ(N) is normal in H.
2.
Let a be an arbitrary element of G1; then the set a f^{-1}(N2) a^{-1} satisfies f(a f^{-1}(N2) a^{-1}) = f(a) N2 f(a)^{-1} = N2, since N2 is normal in G2. Thus a f^{-1}(N2) a^{-1} ⊆ f^{-1}(N2) for every a in G1. This shows that f^{-1}(N2) is a normal subgroup of G1.

Corollary 3.13. [27], [28] The alternating group An is a subgroup of the symmetric group Sn.

Theorem 3.14. Let Γ(Sn), Γ(Dn) and Γ(An) be the subgroup graphs of the symmetric groups Sn, the dihedral groups Dn and the alternating groups An, n ≥ 3. Suppose H and K are vertices on Γ(Dn) and Γ(An) respectively; then both H and K are also vertices on Γ(Sn) and are adjacent to Sn if and only if H is adjacent to Dn on Γ(Dn) and K is adjacent to An on Γ(An). Proof: Let Γ(Sn), Γ(Dn) and Γ(An) be the subgroup graphs of the symmetric, dihedral and alternating groups as above. To show that H and K, which are vertices on Γ(Dn) and Γ(An) respectively, are also vertices on Γ(Sn) and are adjacent to Sn if and only if H is adjacent to Dn on Γ(Dn) and K is adjacent to An on Γ(An), it suffices to show both H and K to be subgroups of Sn. Assume that H is a subgroup of Dn and K is a subgroup of An. Observing the structure of the group of symmetries of a regular n-gon in a plane (the dihedral group Dn), it is isomorphic to a subgroup of Sn and hence is a proper subgroup of Sn. Also, by Corollary 3.13, An is a subgroup of Sn. Moreover, since H is a subgroup of Dn and K is a subgroup of An, by implication they are also subgroups of Sn and hence are vertices on Γ(Sn). Conversely, assume that H is adjacent to Dn on Γ(Dn) and K is adjacent to An on Γ(An); then, by Definition 3.1, H is a subgroup of Dn and K is a subgroup of An. But both Dn and An are subgroups of Sn, so, by implication, H and K are also subgroups of Sn and are adjacent to Sn.

Theorem 3.15. [29] If n ≥ 3, then the number of subgroups of the dihedral group Dn is τ(n) + σ(n), where τ(n) is the number of divisors of n and σ(n) is the sum of divisors of n.

Remark 3.16. Let Dn be a dihedral group of order 2n; then the number of vertices on the subgroup graph of Dn is τ(n) + σ(n), where τ(n) is the number of divisors of n and σ(n) is the sum of divisors of n.

Theorem 3.17.
Let G′ be the commutator subgroup of a finite group G of order n, and suppose there exists a normal subgroup N of G such that G/N is abelian; then G′ and N are adjacent on the subgroup graph of G. Proof: Let G be a finite group of order n and G′ the commutator subgroup of G. If there exists a normal subgroup N of G such that G/N is abelian, then to show that G′ and N are adjacent on the subgroup graph of G, we must show that either G′ ≤ N or N ≤ G′. Note that N is normal in G and G/N is abelian; then for x, y in G we have (xN)(yN) = (yN)(xN), and, using the definition of coset multiplication, xyN = yxN, which implies (yx)^{-1}(xy) is in N, where (yx)^{-1}(xy) = x^{-1}y^{-1}xy = [x, y]. Similarly [y, x] is in N, and since x and y are arbitrary, any commutator in G is an element of N; and since N is a subgroup of G, any finite product of commutators in G is an element of N, and thus G′ ≤ N.

Theorem 3.18. Let G′ be the commutator subgroup of a finite group G of order n and let H be a subgroup of G. If there exists an element such that , then H and G′ are adjacent on the subgroup graph of G. Proof: By Lemma 1.15(2) and using the method of [25], the map sending to is a homomorphism; thus the image of this map is the subgroup.

Theorem 3.19 (Schur–Zassenhaus). [16] Let G be a finite group and write |G| = ab, where gcd(a, b) = 1. If G has a normal subgroup of order a, then it has a subgroup of order b.

Remark 3.20. Let G be a finite group and write |G| = ab, where gcd(a, b) = 1. Then the vertex set of the subgroup graph of G contains at least two vertices whose orders are a and b respectively.

Theorem 3.21. Let G be a finite nilpotent group such that G/G′ is an abelian p-group, with d(G) the minimal number of elements needed to generate G; then is a vertex on the subgroup graph and it is adjacent to G. Proof: Suppose G is a finite nilpotent group such that G/G′ is an abelian p-group with d(G) as above; then, to show that this is a vertex adjacent to G on the subgroup graph of G, it will suffice if we can show it to be equal to G′, the commutator subgroup of G.
Now, since G is finite, we assume an arbitrary subgroup such that , and obviously it is normal in G; by Theorem 3.19 (the Schur–Zassenhaus theorem) and the methods in [25], , where { }. Also, by Lemma 1.16(3) and Lemma 1.17, we set , and there exist such that and { }. Thus we can assume , and they also exist with , since G is nilpotent, by Theorem 1.18 and Lemma 1.19.

Lemma 3.22. Let G1 and G2 be two non-nilpotent finite groups such that there is an isomorphism between G1 and G2. If the commutator subgroup G1′ of G1 is adjacent to G1 on the subgroup graph of G1, then the commutator subgroup G2′ of G2 is also adjacent to G2 on the subgroup graph of G2. Proof: Suppose G1 and G2 are non-nilpotent finite groups such that there is an isomorphic map between G1 and G2; then there is also an isomorphic map between the commutator subgroups of G1 and G2, which shows the isomorphic relationship between G1′ and G2′. Also, since the commutator subgroup G1′ of G1 is adjacent to G1 on the subgroup graph of G1, the commutator subgroup G2′ of G2 is also adjacent to G2 on the subgroup graph of G2.

Example 3.23. Let Q8 be the quaternion group generated by the matrices 1 = (1 0; 0 1), i = (i 0; 0 −i), j = (0 1; −1 0) and k = (0 i; i 0) [30]. Using matrix multiplication we obtain Q8 = {1, −1, i, −i, j, −j, k, −k}, and observe that the subgroups of Q8 consist of Q8 itself and of the cyclic subgroups ⟨1⟩ = {1}, ⟨−1⟩ = {1, −1}, ⟨i⟩ = {1, −1, i, −i}, ⟨j⟩ = {1, −1, j, −j} and ⟨k⟩ = {1, −1, k, −k}. The following is the subgroup graph of Q8; see Figure 2.

Figure 2. Subgroup graph of Q8.

4 Conclusion

This study has highlighted some algebraic properties and combinatorial structures of the subgroup graphs of finite groups. The connections between the subgroup graphs of finite groups up to homomorphism and isomorphism were also studied, and the relationships between the subgroup graphs of the symmetric groups Sn, the dihedral groups Dn and the alternating groups An, when n ≥ 3, were further examined.

Acknowledgements

The authors wish to thank the reviewers for their helpful comments and recommendations on this article.

References

[1] I.
Kleiner, "History of Group Theory," History of Abstract Algebra, Birkhäuser, Boston, 17–39, 2007.
[2] J. Zhang, F. Xiong and J. Kang, "The Application of Group Theory in Communication Operation Pipeline System," Mathematical Problems in Engineering, 2018.
[3] J. Laane and E. J. Ocola, "Application of Symmetry and Group Theory for the Investigation of Molecular Vibrations," Acta Applicandae Mathematicae, 118(1), 3–24, 2012.
[4] E. A. Rietman, R. L. Karp and J. A. Tuszynski, "Review and Application of Group Theory to Molecular System Biology," Theoretical Biology and Medical Modelling, 8(21), 2011.
[5] H. Osborn, "Symmetry Relationships between Crystal Structures: Application of Crystallographic Group Theory in Crystal Chemistry," Contemporary Physics, 6(1), 97–98, 2015.
[6] A. Cayley, "Desiderata and Suggestions: The Theory of Groups: Graphical Representation," American Journal of Mathematics, 1(2), 403–405, 1878.
[7] W. B. Vasantha Kandasamy and F. Smarandache, "Groups as Graphs," Editura CuArt and authors, 2009.
[8] P. H. Zieschang, "Cayley Graphs of Finite Groups," Journal of Algebra, 118, 447–454, 1988.
[9] P. J. Cameron and S. Ghosh, "The Power Graph of a Finite Group," Discrete Mathematics, 311, 1220–1222, 2011.
[10] S. U. Rehman, A. Q. Baig, M. Imran and Z. U. Khan, "Order Divisor Graphs of Finite Groups," An. St. Univ. Ovidius Constanta, 26(3), 29–40, 2018.
[11] J. S. Williams, "Prime Graph Components of Finite Groups," Journal of Algebra, 69(2), 487–513, 1981.
[12] X. L. Ma, H. Q. Wei and G. Zhong, "The Cyclic Graph of a Finite Group," Algebra, 2013.
[13] A. Erfanian and B. Tolue, "Conjugate Graphs of Finite Groups," Discrete Mathematics, Algorithms and Applications, 4(2), 2012.
[14] B. Akbari, "Hall Graph of a Finite Group," Note Mat., 39(2), 25–37, 2019.
[15] A.
Lucchini, "The Independence Graph of a Finite Group," Monatshefte für Mathematik, 193, 845–856, 2020.
[16] D. Gorenstein, "Finite Groups," Harper & Row, New York, 1968.
[17] D. J. S. Robinson, "A Course in the Theory of Groups," 2nd edition, Springer-Verlag, New York, 1996.
[18] D. S. Dummit and R. M. Foote, "Abstract Algebra," 3rd edition, John Wiley and Sons Inc., 2004.
[19] C. Godsil and G. Royle, "Algebraic Graph Theory," 5th edition, Springer, Boston, New York, 2001.
[20] A. Gupta, "Discrete Mathematics," S.K. Kataria & Sons, 258–310, 2008.
[21] P. J. Cameron, "Notes on Finite Group Theory," www.maths.qmul.ac.uk, 2013.
[22] H. E. Rose, "A Course on Finite Groups," Springer Science & Business Media, 2009.
[23] A. E. Clement, S. Majewicz and M. Zyman, "Introduction to Nilpotent Groups," The Theory of Nilpotent Groups, Birkhäuser, Cham, 2017.
[24] W. A. Trybulec, "Commutator and Center of a Group," Formalized Mathematics, Université Catholique de Louvain, 2(4), 1991.
[25] R. M. Guralnick, "Commutators and Commutator Subgroups," Advances in Mathematics, 45, 319–330, 1982.
[26] people.math.binghamton.edu (accessed on 2nd October, 2020).
[27] www.math.columbia.edu (accessed on 6th November, 2020).
[28] K. Conrad, "Simplicity of An," kconrad.math.uconn.edu (accessed on 7th November, 2020).
[29] S. R. Cavior, "The Subgroups of the Dihedral Groups," Mathematics Magazine, 48, 107, 1975.
[30] M. Tarnauceanu, "A Characterization of the Quaternion Group," An. St. Univ. Ovidius Constanta, 21(1), 209–214, 2013.
International Journal of Applied Sciences and Smart Technologies, Volume 3, Issue 2, pages 185–202, p-ISSN 2655-8564, e-ISSN 2685-9432

The Simulation of Traffic Signal Preemption Using GPS and Dijkstra Algorithm for Emergency Fire Handling at Makassar City Fire Service

M. Friaswanto¹, E. A. Lisangan¹·*, S. C. Sumarta¹
¹Department of Information Technology, Atma Jaya University of Makassar, Indonesia
*Corresponding author: erick_lisangan@lecturer.uajm.ac.id
(Received 04-11-2021; revised 28-12-2021; accepted 28-12-2021)

Abstract. The Makassar City Fire Department often faces obstacles in handling fires, such as congestion at crossroads and panicking residents. The result of this research is a system that can assist firefighters in handling fire cases by speeding up the firefighting team's journey to the location of the fire. Dijkstra's algorithm is used to find the shortest path to the fire location and the corresponding travel time, and a traffic signal preemption simulation then adjusts the color of the lights when the GPS-tracked vehicle approaches the traffic lights on the path to be traversed. The simulation results show that the use of traffic signal preemption, in combination with Dijkstra's algorithm and GPS, can support the performance of the Makassar City Fire Department, especially in handling fires that demand a fast response.
Keywords: fire service, traffic signal preemption, GPS simulation, Dijkstra algorithm

1 Introduction

Based on report data obtained from the Makassar City Fire Service, there were 209 fire incidents in the Makassar City area in 2018, spread across 14 sub-districts. From these 209 fire cases, losses were estimated at around Rp 22,040,000,000; 10 people died due to fire and 7 people were injured. Realizing the dangers of fire, a fire department has been established in every region of Indonesia, including Makassar City, in order to prevent and overcome fires that can occur at any time. When a fire occurs, firefighters must always be ready to handle and extinguish it. Before the firefighters arrive, the community usually works together to extinguish the fire manually while helping the victims, but this is obviously very difficult once the fire has grown or the wind is strong; therefore the presence of a fire department is very necessary. Based on the results of the author's interview with one of the Makassar City Fire Department officers, there are several problems that can interfere with or hinder the performance of officers when dealing with fires: road congestion, including at traffic-light intersections; finding the fastest route to the fire location; information that is slow to arrive; residents and journalists whose presence hinders the work of officers; citizens who are willing to use extinguishing equipment to help but do not know the function of the tools; and so on. These problems can cause further harm to the victims, which makes every minute very valuable in firefighting.
The problems of congestion and traffic lights can also interfere with the performance of the Makassar City Fire Department, whereas, based on the Regulation of the Minister of Public Works No. 20 of 2009, the emergency response time for fires in Indonesia should not be more than 15 minutes after receiving notification of a fire at a location 7.5 km from the nearest fire station. For the Makassar City area, there are 7 fire stations scattered in several places. The problems of congestion and traffic conditions at crossroads with traffic lights were also raised by the firefighters interviewed by the researchers. When the fire engine is on a congested road, it is forced to reduce speed, making the firefighters late to the fire location; this results in delays in handling fires and can lead to greater losses. Even though they have the right to break through traffic lights or take the opposite lane, firefighters are also often held up at intersections if the traffic light is still red and they cannot move to the opposite lane. As for finding a route for fire trucks to the fire location, the Makassar City Fire Department has so far relied on the knowledge of the team leader and the other officers in charge of extinguishing the fire; when there is an obstacle on the road, the team leader must be able to find an alternative route for his team to the fire location. One of the algorithms that can be used to find the shortest route is Dijkstra's algorithm, invented by Edsger W. Dijkstra and published in 1959. This algorithm is used to solve the shortest path problem for a directed graph.
The algorithm finds the shortest path from the starting point to the end point based on the smallest weight from one point to another. Dijkstra's algorithm uses a greedy strategy: at each step, the edge with the smallest weight is selected among those connecting an already-selected node with a node that has not yet been selected [1], [2]. The results of Dijkstra's algorithm will be used as directions for Makassar City firefighters when heading to the fire location. To find the shortest path to the fire location, the Dijkstra method is applied with weight values taken from the distance between nodes and the value of traffic density. Several previous studies have examined the search for the shortest route, as in [3], [4], [5], [6]. Ratnasari et al. (2013) concluded that Dijkstra's algorithm can produce a simulation of the shortest path along with alternative paths and the travel time required for a vehicle to reach a certain location. Iswanjono and Wijaya (2015) designed an automatic system to regulate traffic lights at crossroads; this system works after a vehicle that has special priority to go through a red light sends GPS coordinates via radio waves, which are then captured by a receiver mounted on a traffic light. Aquarizky et al. (2017) concluded that the Floyd-Warshall algorithm can be used to find the shortest route for firefighters, but it had not been integrated with congestion data and red-light settings in real time; in addition, this algorithm is not appropriate for a wide area because the suggested results are not optimal. Septifany et al. (2017) compared finding the optimum route with Dijkstra's algorithm and the A* algorithm. The results of this comparison state that there is no significant difference between the two.
In addition, the suitability of the route obtained from the routing process using PostgreSQL was still less accurate than the route generated by Google Maps, because the road shapefile data was incomplete compared to that owned by Google Maps, and PostgreSQL was not equipped with weighting for traffic directions; as a result, there were routes that could not be used due to inappropriate road choices. In that study, the determination of the shortest route was based only on the distance between nodes, and no weight was given to the traffic conditions that would be traversed by the firefighters. Several developed countries have implemented traffic signal preemption / traffic signal prioritization to regulate traffic lights so that vehicles that have priority are given the road: if a priority vehicle is about to pass, its light turns green and the surrounding lights turn red. This can be used to let emergency vehicles through by turning their light green; the light change helps emergency vehicles arrive at the location faster and increases safety on the way to the fire location [7]. One method that can be used to implement traffic signal preemption is the Global Positioning System, or GPS [8]. The Global Positioning System (GPS) is a radio navigation system using 24 satellites orbiting the earth in 6 circular orbits, all of which transmit signals to earth that are captured by signal receivers. The GPS system was originally developed by the US Department of Defense in the early 1980s [9]. Meanwhile, the traffic signal preemption / traffic signal prioritization hardware is simply built using a Wemos D1. In this simulation, the GPS simulation is implemented in JavaScript to provide the coordinates of the vehicle that will go to the location of the fire.
When the fire department vehicle starts toward the fire location, the simulated GPS coordinates of the fire squad vehicle are compared with the coordinates of the nearest traffic light on the path that the vehicle will take. The nearest traffic light on the path to be traversed turns green in order to give priority to the firefighter vehicles, so that every vehicle on that path keeps moving and there is no accumulation of vehicles on the route the fire vehicle will take to the fire location. Once a traffic light has been passed, it returns to its normal status, while the next nearest traffic light becomes the priority until the fire engine passes.

2 Research Methodology

Research design. The design method used is the waterfall method (Figure 1), which consists of analysis, design, coding, testing, and implementation and maintenance. The data collection methods used by the authors in this research are literature study and interviews. The literature study was used to collect material on traffic signal preemption, Wemos and the Dijkstra algorithm from various reference sources: books and offline and online journals. Interviews were used to obtain information by asking questions directly to the resource persons, namely the Makassar City Fire Department.

Figure 1. Waterfall method [10]

The authors conducted an interview process with the Makassar City Fire Department to find out the problems faced by firefighters when carrying out their duties to extinguish fires.
From the interviews, it was concluded that the Makassar City Fire Department currently requires a system that is able to display the path to the fire location from the fire department headquarters, as well as a traffic light control system to assist fire trucks on the way to the fire location.

Traffic light simulation circuit. The traffic signal preemption simulation requires a digital traffic light circuit, which can be seen in Figure 2. The circuit consists of several components: a Wemos board, a traffic light module, and an I2C LCD. The Wemos board is used as a microcontroller whose job is to control which lamps are turned on at the traffic lights, based on the information received from the server. Communication between the Wemos and the server uses the internet, connected via the ESP8266 WiFi module on the Wemos. In addition, the I2C LCD is an information display on the traffic light that informs motorists that a fire engine is about to pass through the area around the traffic light, so that drivers can pull over to make room for the fire engine.

Figure 2. Traffic light simulation circuit

3 Results and Discussion

Traffic signal preemption algorithm design. The calculation of the Dijkstra algorithm requires a weight for each pair of related nodes; in this study the weight is taken from the distance between nodes in meters plus a value of traffic density, generated randomly in the range 0 to 100. The limit value of 100 was selected with the assumption that a road segment can accommodate a maximum of 100 vehicles at one time, so that a density value above 75 is categorized as moderately dense.
The distance of each node and the value of traffic density are summed, and this sum is the main weight used in Dijkstra's algorithm. The results of the algorithm, in the form of the path, distance and travel time of the vehicle, are displayed on the map. Based on the distance obtained, the travel time is calculated at a standard speed of 40 km/hour, or 666.67 meters/minute. For the traffic light settings, the distance from the vehicle's coordinates to each traffic light on the path to be traversed is calculated; the traffic light that lies on the route and is closest to the vehicle turns green, and when the vehicle has passed that traffic light, the light returns to normal. Then, once the fire has been handled, a fire case report is filed. The flowchart of the traffic signal preemption system simulation can be seen in Figure 3.

Figure 3. Flowchart of the traffic signal preemption system simulation

System workflow. Figure 4 illustrates the workflow of the system for the fire brigade. Fire location information, in the form of the distance and travel time to the location, is received by the fire brigade. When the firefighters are about to leave for the fire location, the leader of the firefighting team presses the "depart" button; as the vehicle moves, its coordinates are sent to the system to calculate the distance from the vehicle to the nearest traffic light on the predetermined path. When a nearby light is found, it is set to green to give priority to the fire department.
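The vehicle-to-light distance check described above can be sketched as follows. This is an illustrative sketch, not the paper's JavaScript implementation: the two coordinates are the intersection points named later in this paper, but the haversine formula and the 200 m trigger radius are assumptions of mine, since the paper does not state how the distance is computed or at what range a light is switched.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in metres between two latitude/longitude points
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# the two traffic lights used in the simulation (points A and B)
LIGHTS = {
    "A": (-5.14528510434342, 119.41523463271892),
    "B": (-5.13698889347666, 119.41410191337218),
}

def next_priority_light(vehicle, passed, radius_m=200.0):
    # pick the nearest light not yet passed; grant it priority when in range
    best = None
    for name, pos in LIGHTS.items():
        if name in passed:
            continue
        d = haversine_m(*vehicle, *pos)
        if best is None or d < best[1]:
            best = (name, d)
    if best and best[1] <= radius_m:
        return best[0]   # this light should be switched to green
    return None          # no light close enough yet
```

Once the returned light has been passed, its name goes into `passed` and the same function selects the next light on the route, mirroring the hand-over behaviour the paper describes.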
When the firefighters have arrived at the location, the team leader can turn off the map, and when the fire is finished, the fire team leader can send information that the fire has been handled.

Figure 4. Fire squad system workflow

Graph representation. To form a graph of a real road network, the latitude and longitude coordinates of each road are needed, along with the distance of each road. To form a graph from geolocation data, the nodes/points of each location must first be determined; after that the nodes are connected. A node can be connected to several nodes at once, and the connected nodes together form the graph. In this study, the researchers used 103 nodes to connect 45 road points in the Makassar City area adjacent to the fire station on Jalan Ratulangi, with a total of 190 nodes (Figure 5).

Figure 5. Graph representation of several Makassar City roads

The implementation of Dijkstra's algorithm. The Dijkstra process in the system is carried out by determining the starting point and ending point to be processed. The starting/departure point used is the Makassar City fire station at coordinates -5.149009661835451, 119.4167173586253, that is, the fire department on Jalan Ratulangi, while the end point is the coordinates of the road where a fire occurs. The weight of a road is obtained from the length or distance of each pair of interconnected nodes and from the value of traffic density, which ranges from 0 to 100 and is assumed to represent the number of vehicles at a road point. Selected waypoints are given a fixed weight for the period 16:00-16:30.
Meanwhile, at times other than these hours, the value of traffic density is randomly generated at each road point, so that several possible paths can be produced. A higher value of traffic density on a road affects the process of finding the shortest path. From the departure point, each neighboring point that has not been visited is considered and its weight is calculated. For example, suppose the distance from point A to point B is smaller than the distance from point A to point C; then the data for A to B is stored. If a smaller distance is later found, the old data is deleted and replaced with the new data. Each node/point that has already been visited is not calculated again. Then the unvisited point with the smallest distance (from the departure node) is set as the new departure node. Determination of the shortest path using the Dijkstra algorithm produces the shortest path along with the travel time for the fire fighting vehicles heading to the fire location. As for the traffic lights in the city of Makassar, most of them are still analog or manual: a timer installed at the traffic light changes the color of the lights in under 1 minute. Such lights are unresponsive and cannot adapt to the conditions at the crossing. A microcontroller can be used to change the status of the lights when the fire vehicle is on the path to be traversed. For the change of lights, when the firefighters enter the map display page, the GPS simulation starts running and begins sending the coordinates of the traffic lights on the path obtained from the results of the Dijkstra algorithm, which will be given priority for the firefighters tasked with extinguishing the fire.
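The weighting and relaxation steps described above can be sketched with a standard heap-based Dijkstra. This is an illustrative sketch, not the paper's code: the edge weight is distance plus density as the paper specifies, and 666.67 m/min is the paper's speed figure, but the toy three-road network and its density numbers are invented here (the edge lengths are chosen to echo the paper's 5068 m and 5287 m example routes).

```python
import heapq

SPEED_M_PER_MIN = 666.67  # 40 km/h, the standard speed assumed in the paper

def dijkstra(graph, start, goal):
    # graph: node -> list of (neighbour, distance_m, density_0_100)
    # edge weight = distance + density, as in the paper's weighting scheme
    dist = {start: 0.0}     # accumulated weight
    metres = {start: 0.0}   # accumulated physical distance, for travel time
    prev = {}
    heap = [(0.0, start)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue            # already finalized with a smaller weight
        seen.add(u)
        if u == goal:
            break
        for v, length, density in graph.get(u, []):
            nd = d + length + density
            if nd < dist.get(v, float("inf")):
                dist[v] = nd    # smaller weight found: replace the old data
                metres[v] = metres[u] + length
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk back from the goal to recover the path
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    path.reverse()
    return path, metres[goal], metres[goal] / SPEED_M_PER_MIN

# toy network: two candidate routes from the HQ to the fire location
graph = {
    "HQ": [("A", 2000, 80), ("B", 2500, 10)],
    "A":  [("Fire", 3068, 70)],
    "B":  [("Fire", 2787, 15)],
}
path, d, t = dijkstra(graph, "HQ", "Fire")
print(path, d, round(t, 2))  # ['HQ', 'A', 'Fire'] 5068.0 7.6
```

Note that the combined weight, not the raw distance, decides the route: raising the density on the HQ-A or A-Fire edges would push the search onto the longer but less congested route via B, which is exactly the congestion-avoidance behaviour described above.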
Then, when the fire engine has passed the traffic light coordinates, the lights automatically return to normal status. In this simulation, the search for the shortest path and the traffic light settings are carried out while the fire engine is heading to the fire location. The location point used is on Jalan Panampu, where the interviews indicated that there had been a delay in getting to the location of a fire. The delay limit used by the authors is based on the Regulation of the Minister of Public Works No. 20 of 2009, under which the emergency response time for fires in Indonesia should not be more than 15 minutes after receiving notification of a fire at a location 7.5 km from the nearest fire station. For the search for the shortest path there are two simulations, namely a simulation at around 16:00-16:30 and one outside these times (Figure 6). In the hours outside 16:00-16:30 there are several possible paths that can be taken; however, the route with the shortest distance is sought from the fire department headquarters to Jalan Panampu, along with the travel time to the fire location at a standard speed of 40 km/hour or 666.67 meters/minute.

Figure 6. Route search results at 16:00-16:30

From the results of the path search by the Dijkstra algorithm at 16:00-16:30, the distance to the fire location on Jalan Panampu is 5068 meters, with an estimated travel time of 7 minutes, departing from the fire department headquarters on Jalan Ratulangi. The path found by Dijkstra's algorithm is as follows: Mabes Pemadam → Jln Ratulangi (in front of Wisma Kalla) → intersection of Jln Jend. Sudirman x Jln Sungai Saddang Lama → intersection of Jln Jend. Sudirman x Jln Kartini → Jln Ahmad Yani (around MTC) → Jln Jend. M.
Jusuf → Jln Andalas → intersection of Jln Ujung x Jln Yos Sudarso → intersection of Jln Yos Sudarso x Jln Cakalang → Jln Cakalang → intersection of Jln Cakalang x Jln Panampu → Jln Panampu. From the results of the path search by the Dijkstra algorithm outside these hours, it was found that there were two possible paths to the fire location on Jalan Panampu. The first is 5068 meters long, with an estimated travel time of 7 minutes, departing from the fire department headquarters on Jalan Ratulangi (Figure 7). The path found by Dijkstra's algorithm is as follows: Mabes Pemadam → Jln Ratulangi (in front of Wisma Kalla) → intersection of Jln Jend. Sudirman x Jln Sungai Saddang Lama → intersection of Jln Jend. Sudirman x Jln Kartini → Jln Ahmad Yani (around MTC) → Jln Jend. M. Jusuf → Jln Andalas → intersection of Jln Ujung x Jln Yos Sudarso → intersection of Jln Yos Sudarso x Jln Cakalang → Jln Cakalang → intersection of Jln Cakalang x Jln Panampu → Jln Panampu.

Figure 7. Search results for the first path to Jalan Panampu

The second route is 5287 meters long, with an estimated travel time of 7.94 minutes to the fire location on Jalan Panampu, departing from the fire department headquarters on Jalan Ratulangi (Figure 8). The path found by Dijkstra's algorithm is as follows: Mabes Pemadam → Jln Ratulangi (in front of Wisma Kalla) → intersection of Jln Jend. Sudirman x Jln Sungai Saddang Lama → intersection of Jln Jend. Sudirman x Jln Kartini → Jln Ahmad Yani (around MTC) → Jln Jend. M. Jusuf → intersection of Jln Jend. M. Jusuf x Jln Veteran Utara → Jln Bandang → intersection of Jln Bandang x Jln Ujung → intersection of Jln Ujung x Jln Yos Sudarso → intersection of Jln Yos Sudarso x Jln Cakalang → Jln Cakalang → intersection of Jln Cakalang x Jln Panampu → Jln Panampu

Figure 8.
Search results for the second path to Jalan Panampu

The difference between these path search results is due to the different weights obtained on each lane that the fire vehicles can take to the fire location on Jalan Panampu. The weight in question is the distance of each connected node from the starting point, namely the fire department headquarters, to the fire location on Jalan Panampu, plus the value of traffic density, which is assumed to lie in the range 0-100. When a different path is chosen, it is because another path has a higher traffic density value, that is, there is congestion on it, so the Dijkstra algorithm chooses a path with both a low density value and a low distance.

User interface. The user interface for the firefighters can be seen in Figure 9. If there is a fire report, a warning is shown. To enter the system, the fire squad leader can directly enter to view the map and the path calculated by the Dijkstra algorithm, along with the travel time. In addition, when the firefighters enter the map page, the status of the traffic light nearest the vehicle on the path to be traversed is changed to green. When the firefighters receive a fire report and start leaving for the fire location, the
sudirman with saddang lama river road or at coordinates -5.14528510434342, 119.41523463271892. and point b which is at the intersection of jend. sudirman by amampanga street or at coordinates -5.13698889347666, 119.41410191337218. when the firefighter's vehicle starts walking towards the location of the fire, the gps simulation of the vehicle will begin to be sent and the vehicle's coordinates will begin to be calculated the distance to the traffic light that is in the path that will be passed by the firefighter's vehicle to the location of the fire. when the coordinates of the vehicle have passed the coordinates of the nearest light, the light will return to its normal status and the next closest light will turn green to give priority to the fire engine when it comes to the fire location. when the fire department vehicle has arrived at the fire location, the team leader can press the arrive at location button to indicate that the firefighters have arrived at the fire scene. when the firefighters have finished extinguishing the fire, the team leader can press the handled button to notify the system that the fire has been controlled and the fire status in the database is handled. figure 9. user interface for firefighters squad international journal of applied sciences and smart technologies volume 3, issue 2, pages 185–202 p-issn 2655-8564, e-issn 2685-9432 198 simulation testing in the results of the traffic signal preemption simulation design, the system is made to adjust the color of the traffic lights that will be passed by fire vehicles. traffic signal preemption simulation using wemos will capture the results of calculations between vehicle gps simulation data and traffic light point data on the path that will be traversed by the fire to the location of the fire. each wemos will be given coordinates according to the coordinates of the traffic lights, which in this case the traffic lights are located at two points, namely point a which is at the jend. 
sudirman with saddang lama river road or at coordinates -5.14528510434342, 119.41523463271892. and point b which is at the intersection of jend. sudirman by amampanga street or at coordinates 5.13698889347666, 119.41410191337218. wemos which will connect to the server will read the led.json file sent by the gps simulation file and contain the coordinates of the lights which will be changed to the priority of the fire department. next wemos will read a json file containing the coordinates of the traffic lights that will be prioritized. the coordination of the lights in the data sent to wemos is the same as certain wemos coordinates, then wemos with the coordinates sent will change the lights from a normal state to a priority for firefighting vehicles so that vehicles on the path to be traversed by fire vehicles will be able to run well. there are vehicles on the path that will be traversed by firefighters. see figure 10. figure 10. lamp and lcd on priority condition when the light status changes to priority, wemos d1 takes a fraction of a second to read the led.json file until it changes the color of the lane 1 light to green and the lane 2 international journal of applied sciences and smart technologies volume 3, issue 2, pages 185–202 p-issn 2655-8564, e-issn 2685-9432 199 light to red at the traffic lights at points a and b. in addition, the i2c lcd also displays the words notice that a fire engine is about to pass. when the coordinates of the nearest traffic light have been traversed by the fire vehicle heading to the location of the fire, the next light closest to the coordinates of the fire engine will be the priority and the previous light will return to its normal situation. and when the fire brigade has arrived at the scene of the fire, the last light will also return to its normal status. from the light change test, it can be seen that the traffic lights a and b can run well in manual mode, where the red, yellow and green lights run alternately. 
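The per-light preemption behavior tested here can be sketched in JavaScript, the language the study already uses for its coordinate simulation. This is a minimal illustration only: the two light points and their coordinates come from the text, but the function and variable names, the haversine distance check, and the 30 m "passed" radius are assumptions, not the authors' actual code.

```javascript
// The two traffic-light points from the paper, in route order.
// "normal" → "priority" (green for the fire-engine lane) → "passed".
const LIGHTS = [
  { id: "A", lat: -5.14528510434342, lon: 119.41523463271892, state: "normal" },
  { id: "B", lat: -5.13698889347666, lon: 119.41410191337218, state: "normal" },
];

// Great-circle distance in metres between two WGS-84 points (haversine).
function haversine(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Give priority to the nearest not-yet-passed light on the route; once
// the vehicle is within `passedRadius` metres of it, the light is marked
// passed (it would revert to normal) and the next light becomes priority.
function updateLights(vehicleLat, vehicleLon, passedRadius = 30) {
  for (const light of LIGHTS) {
    if (light.state === "passed") continue;
    const d = haversine(vehicleLat, vehicleLon, light.lat, light.lon);
    if (d <= passedRadius) {
      light.state = "passed"; // vehicle has crossed this light
      continue;               // the next light on the route takes over
    }
    light.state = "priority"; // green for the fire-engine lane
    break;                    // only one light is priority at a time
  }
  return LIGHTS.map((l) => `${l.id}:${l.state}`).join(" ");
}
```

In the real system the state change would be written to led.json on the server for each Wemos to read and act on; here the function simply returns the combined state string so the behavior is easy to inspect.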
The traffic lights at points A and B change automatically only when the vehicle simulation is running. There are two lights at each point; for example, when the simulated vehicle approaches the point A lights, the first light turns green while the second turns red, and the same applies to the two lights at point B. The light that turns green is on the lane the firefighting vehicle will use. When the simulated GPS position of the vehicle has passed the coordinates of the point A lights, those lights return to their normal status; then the next closest lights, those at point B, turn green on the lane to be traversed while the lights on the other side of the road turn red, until the vehicle has passed the coordinates of the point B lights. How quickly the lights change depends on the speed of the internet connection Wemos uses to read the JSON files on the server.

Implementation Testing

The author carried out implementation tests through program demonstrations, combined with interviews with the head of the operational section of the Makassar City Fire Department. The interviews and demonstrations indicate that:

1. The functions in the application for the fire admins run well and meet the needs, but further development is hoped for so that the application covers not only fire reports but also other types of reports, such as reports of animal disturbances and the rescue of people or animals.

2. The traffic signal preemption simulation is considered very good and able to help the firefighters: it provides a route to the fire location, and the traffic light control lets fire engines pass intersections with traffic lights without interference, minimizing the possibility of accidents when breaking through an intersection.

3. For actual deployment, collaboration with several relevant agencies in the Makassar City area is needed, namely the Makassar City Communications and Information Service, the Makassar City and South Sulawesi Province Transportation Services, and the Makassar City Health Office.

The conclusion drawn from this implementation test is that the system runs well and can help the performance of the Makassar City Fire Department, especially in handling fires that demand a fast response. Further development of the admin-side input is hoped for, so that not only fire reports but also other types of reports can be entered, and collaboration with several agencies will be needed before the system can be properly implemented.

4 Conclusion

The traffic signal preemption simulation using GPS and Dijkstra's algorithm for emergency fire response at the Makassar City Fire Service was designed to help the performance of the firefighters. The simulation has been able to adjust the color of the traffic lights when the simulated GPS position of the fire vehicle approaches the traffic light coordinates, giving priority to the fire vehicle. Meanwhile, the fire emergency application has been able to find the shortest route to the fire location, along with the estimated travel time, using Dijkstra's algorithm.
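The route search summarized above can be sketched as a classic Dijkstra run over a road graph whose edge weights combine segment length with a 0–100 traffic density value, as described in the paper. The toy graph and the weighting formula `dist * (1 + density / 100)` below are illustrative assumptions; the paper does not give its exact weighting or road data.

```javascript
// Minimal Dijkstra sketch with density-weighted edges (assumed formula).
// graph[node] = [{ to, dist, density }, ...]; dist in metres, density 0-100.
function dijkstra(graph, start, goal) {
  const best = { [start]: 0 }; // best known cost per reached node
  const prev = {};
  const visited = new Set();
  while (true) {
    // pick the unvisited node with the smallest tentative cost
    let u = null;
    for (const n of Object.keys(best)) {
      if (!visited.has(n) && (u === null || best[n] < best[u])) u = n;
    }
    if (u === null) return null; // goal unreachable
    if (u === goal) break;
    visited.add(u);
    for (const e of graph[u] || []) {
      // a congested segment costs proportionally more (assumption):
      const w = e.dist * (1 + e.density / 100);
      if (best[e.to] === undefined || best[u] + w < best[e.to]) {
        best[e.to] = best[u] + w;
        prev[e.to] = u;
      }
    }
  }
  const path = [goal];
  while (path[0] !== start) path.unshift(prev[path[0]]);
  return { path, cost: best[goal] };
}

// Toy usage: two candidate routes of equal raw length (5,000 m);
// congestion (density 80) on the HQ→Y segment makes the search
// prefer the route through X, mirroring the two-route case above.
const graph = {
  HQ: [{ to: "X", dist: 2000, density: 0 }, { to: "Y", dist: 1800, density: 80 }],
  X:  [{ to: "Panampu", dist: 3000, density: 0 }],
  Y:  [{ to: "Panampu", dist: 3200, density: 0 }],
};
// dijkstra(graph, "HQ", "Panampu")
//   → { path: ["HQ", "X", "Panampu"], cost: 5000 }
```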
In this study, testing has not yet covered route changes in real time. Future research is expected to accommodate real-time route changes and to use a more optimal route determination algorithm.