International Journal of Interactive Mobile Technologies (iJIM) – eISSN: 1865-7923 – Vol. 15, No. 19, 2021

A Development of a Multi-Language Interactive Device Using Artificial Intelligence Technology for Visually Impaired Persons

https://doi.org/10.3991/ijim.v15i19.24139

Norharyati Harum(), Nur'aliah Izzati M. S. K, Nurul A. Emran, Noraswaliza Abdullah, Nurul Azma Zakaria, Erman Hamid, Syarulnaziah Anawar
Universiti Teknikal Malaysia Melaka, Durian Tunggal, Malaysia
norharyati@utem.edu.my

Abstract—The lack of reference books in braille in most public buildings, especially in places such as libraries and museums, is a crucial issue. Visually impaired or blind people cannot access the information that sighted readers take for granted. Therefore, a multi-language reading device for the visually impaired is designed and built to overcome the shortage of braille reference books in public places. Research on currently available products was carried out to develop a better reading device. The device improves on a previous project that supported only a single language, which is not suitable for public places. The reading device takes a picture of the book with a 5MP Pi camera, the Google Vision API extracts the text, and the Google Translation API detects the language and translates it into the language selected by the user through push buttons. Google Text-to-Speech then converts the text to speech, and the device reads it aloud through an audio output such as a speaker or headphones. Several tests were conducted to verify the functionality and accuracy of the reading device: a functionality test, a performance test and a usability test. The reading device passed most of the tests and obtained a score of 91.7/100, which is an excellent (A) rating.

Keywords—book reader, image-text conversion, text-to-speech, google cloud, artificial intelligence, visual impairment people

1 Introduction

Reading is one of the most important skills and it brings many advantages. People gain knowledge and information about a particular field from books, magazines, newspapers and other sources. Based on a report of the United Nations Educational, Scientific and Cultural Organization (UNESCO), Malaysia's literacy rate stands at 94.64 percent, which is very high according to the measurement level set by the UNESCO Institute for Statistics. It is important that everyone has access to information. Malaysia is a multi-ethnic and multicultural country; according to [1], the official language of Malaysia is Malay, but other languages such as English, Tamil and Mandarin are also widely used.

Braille is a system used by the majority of visually impaired people to read and write through a tactile approach for communication and education purposes. Tactile reading means sensing by touching or rubbing the surface of the corresponding output device. The braille system was invented by a blind man named Louis Braille. Braille reading is considered an essential skill for blind people: 74% of blind people are reported to be unemployed because of poor academic achievement, since most of the related academic institutions rely on braille [2].
From [3], it is reported that 95% of blind children do not attend school due to a lack of skilled teachers and limited access to braille materials and equipment. In the report in [4], KL Braille Resources acknowledges that there is a lack of reference books in braille. This is due to the shortage of experts who can convert conventional texts to braille, which is a time-consuming and labour-intensive task. The lack of braille material also makes it hard for visually impaired people to access information and news from inside and outside the country [5],[6].

Nowadays, many people with visual impairment use a variety of assistive technologies to help them become more functional. Assistive technology tools offer people with disabilities the opportunity to develop their social interaction skills, visual competence, independent living skills, career education skills, and orientation and mobility, among others. One of the current products that could assist them is the smart book reader published in [7]. The book reader helps visually impaired people to read a book without using braille. By using the book reader, they can access all the information inside a normal book through an embedded speaker. It helps visually impaired persons to gain knowledge from a normal book as easily as sighted persons do. The book reader in [7] uses image-to-text and text-to-audio conversion technology through Optical Character Recognition (OCR) software. However, that book reader only reads books in English.

The aim of this project is to build a multi-language book reader that can overcome the shortage of reference books in braille. The ability of the developed prototype will benefit blind persons of various spoken languages and help them access knowledge from normal books. The prototype can also be used in public areas such as libraries and museums. The reading device helps braille readers and visually impaired people to read without another person's help. It applies artificial intelligence technology from three main modules in Google Cloud: image-to-text conversion, translation, and text-to-speech. The reading device captures an image of the text using a Pi camera connected to a Raspberry Pi 3 Model B+. The captured image is processed by the image-to-text module of the Google Cloud Vision API. The text is then translated into the desired language using the Google Cloud Translation API before being converted with the Google Cloud Text-to-Speech API. The converted speech is delivered to the blind or visually impaired user through a speaker or headphones.

The project produces a working reading device. The Pi camera connected to the Raspberry Pi 3 Model B+ captures an image of the text; the Google Cloud Vision API converts the image to text, the Google Cloud Translation API translates it into the desired language, and the Google Cloud Text-to-Speech API reads the text aloud. Audio output is supported via speaker and headphones. The result is a low-cost reading device made for the visually impaired.

2 Literature review

Based on [8], the Digital Daisy Book Reader is an Android smartphone application that uses an audio representation of a print publication and is designed to further empower people with impairments.
Blind users can navigate the application using voice commands and predefined gestures to gain information from DAISY audio books. These have all the advantages of standard audio books, but are better because they add content navigation and synchronized text display. This also offers enhanced visual access to the content for blind or otherwise visually impaired people, allowing easy access to different sections of the text. The Digital Accessible Information System (DAISY) was developed by talking-book libraries to lead the global transition from analog to digital books. The DAISY Standard provides maximum versatility in combining text and audio, ranging from audio-only, to text-only, to full text and audio.

Based on [9], OrCam MyEye 2.0 is a smart wearable device that uses computer vision algorithms to help people with vision problems. Its primary goals are to enhance the dignity of individuals and to help visually disabled people communicate by themselves. The design is very basic, lightweight and secure, and it can be clipped onto a pair of glasses. The attached camera can read text instantly from any surface, following the motion of the user, and produces speech through a small speaker. The device can also identify objects, items, and money notes in real time.

In [7], the Smart Book Reader assists blind people or those with low vision to read a book without using braille. It uses IoT technology, consisting of an IoT device, an IoT network and a service. A Raspberry Pi, an IoT computer, is used and is very energy efficient as it requires only a 5V supply to operate. Using the camera, the book reader captures a picture of the book pages and processes the image using Optical Character Recognition (OCR) software. Once the image is recognized, it is read aloud by the book reader. Therefore, blind people or those with poor vision can understand the content without having to touch it with their fingertips. With this book reader, the user can enjoy both softcopy and hardcopy books by using an online text-to-voice converter with the help of IoT connectivity such as Wi-Fi and 4G services. For hardcopy books, an embedded camera captures the page.

Table 1. Comparison of similar products

Device: Daisy Digital Book Reader
Description: Android-based mobile application that uses an audio representation of a print publication. Blind users can navigate the application using voice commands and predefined gestures. Provides access to the information of a standard audio book.
Disadvantage: Cannot read a real book, limited audio book availability, and a blind user needs a helper to use it.

Device: OrCam MyEye 2.0
Description: OrCam mainly consists of an RGB camera and a portable computer. It can be attached to any eyeglass frame. It alerts the user to outside information via audio signals.
Disadvantage: Unaffordable price.

Device: Book Reader
Description: Using the camera, the book reader takes a picture of the book pages and analyses the pictures using OCR software. Once the image is recognized, it is read aloud by the book reader.
Disadvantage: Only reads one language, which is English.

Table 1 compares the three devices based on how they perform their tasks. The first device is a smartphone application, the Daisy Digital Book Reader. The second device is a wearable device named OrCam MyEye 2.0.
The third device is the Book Reader, which uses a Raspberry Pi. The book reader is the best approach to the problem of insufficient resources, since many books are not available in digital format. However, the device has a shortcoming: it only reads and translates one language, English. To overcome this problem, we propose an updated reading device that reads and translates multiple languages.

3 Research method

The reading device for the visually impaired is developed using the Rapid Application Development (RAD) model. RAD is used because it allows the developer to continuously check the entire system during development. RAD has four phases: requirement planning, user design, construction and cutover, with testing taking place in the cutover phase. Functionality, performance and usability tests are carried out on the reading device: the functionality test covers the 5MP Pi camera, sound and push buttons; the performance test covers font type, font size and special characters; and the usability test is run using the System Usability Scale (SUS). The RAD model consists of the following four phases:

• Requirement planning phase – This phase covers both system preparation and system analysis, such as the list of hardware and software needed in the project and the user requirements for the prototype. To determine the user requirements, a literature study about blind people and their limitations, their needs, braille limitations and previous similar products was carried out.
• User design phase – In this phase, the entire framework for the prototype is designed, including the flowchart, Gantt chart and milestones. The prototype design and concept are determined in this phase.
• Construction phase – In this phase, the prototype is developed based on the design and requirements determined in the previous phases.
• Cutover phase – This is the testing phase, consisting of three types of tests: functionality, performance and usability. The functionality test covers the 5MP Pi camera, push buttons and sound, to ensure that all hardware functions correctly. The performance test examines the minimum readable font size for different font types and special characters, to ensure the reading device can read real books. A questionnaire is given to users to test the usability of the reading device; this test is known as the System Usability Scale. A final test is conducted to ensure the prototype is fully ready for use.

4 Design and implementation

The design of the proposed prototype consists of two parts: the hardware/device and the cloud. The device part is used to take a picture of the page inside the book, while the cloud part is used to process the captured picture. The picture is processed in Google Cloud using artificial intelligence technology in three phases: image-to-text conversion, translation and text-to-speech conversion. The architecture of this design must be user friendly for blind people by using an appropriate structure. The system is connected to Wi-Fi to use the Google Cloud libraries.

Fig. 1. Physical design of reading device using Raspberry Pi

4.1 Hardware/device

Figure 1 shows the physical design of the reading device for visually impaired users. The Pi camera connects to the camera slot on the Raspberry Pi board. The speaker connects to the audio port, the power supply connects to the power port, and the buttons connect to the GPIO pins of the Pi, as shown in Figure 2 and Figure 3.

Fig. 2. Circuit design of reading device using Raspberry Pi

Parts          GPIO pins
Push Button 1  16
Push Button 2  18
Push Button 3  22
Push Button 4  32
Push Button 5  36
Push Button 6  40

Fig. 3. GPIO of reading device using Raspberry Pi

Figure 4 shows the flowchart of the proposed reading device. When the reading device is activated, the user chooses among six buttons, as shown in Figure 4, to continue. The embedded camera takes an image when the user presses one of buttons 2 to 5, which represent four different languages. Once the image is captured, it is converted to text, the text is translated into the desired language, and the translation is converted to an audio file; the image and the audio file are saved. If the capture is unsuccessful, the user presses the button again and the camera retakes the shot. After that, the device plays the audio file. For users who wish to play or pause the recording, button 1 acts as a play/pause button. Button 6 is used to shut down the reading device.
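As a rough illustration of the button handling described above, the sketch below polls the six push buttons with the RPi.GPIO library. This is a minimal sketch, not the authors' actual code: it assumes physical (BOARD) pin numbering for the pins listed in Figure 3, buttons wired to ground with internal pull-ups, and an illustrative mapping of buttons 2–5 to target languages; capture_and_read and toggle_playback are hypothetical stand-ins for the capture/translate/speak pipeline and the play/pause control.

```python
# Minimal sketch of the button loop (assumptions: BOARD pin numbering,
# buttons wired to ground with internal pull-ups, illustrative language mapping).
import time
import RPi.GPIO as GPIO

PLAY_PAUSE_PIN = 16                      # Push Button 1
LANGUAGE_PINS = {18: "en",               # Push Button 2 -> English (assumed mapping)
                 22: "ms",               # Push Button 3 -> Malay
                 32: "ta",               # Push Button 4 -> Tamil
                 36: "zh"}               # Push Button 5 -> Mandarin
SHUTDOWN_PIN = 40                        # Push Button 6

def capture_and_read(lang):
    """Hypothetical stand-in: capture a page, OCR it, translate to `lang`, speak it."""
    print(f"capture and read in {lang}")

def toggle_playback():
    """Hypothetical stand-in: play or pause the most recent audio file."""
    print("toggle playback")

GPIO.setmode(GPIO.BOARD)
for pin in [PLAY_PAUSE_PIN, SHUTDOWN_PIN, *LANGUAGE_PINS]:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def pressed(pin):
    """A button pulled to ground reads LOW while pressed."""
    return GPIO.input(pin) == GPIO.LOW

try:
    while True:
        if pressed(PLAY_PAUSE_PIN):
            toggle_playback()
        for pin, lang in LANGUAGE_PINS.items():
            if pressed(pin):
                capture_and_read(lang)
        if pressed(SHUTDOWN_PIN):
            break                        # shut the reading device down
        time.sleep(0.1)                  # simple polling interval / debounce
finally:
    GPIO.cleanup()
```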
4.2 Google cloud

As described in the previous section, the reading device is integrated with the AI modules in Google Cloud. Thus, the following Google Cloud modules need to be installed [10],[11]:

• Cloud Vision API is used to perform image-to-text conversion.
• Cloud Translation API is used to translate the converted text.
• Google Cloud Text-to-Speech is used to convert the translated text to speech.

The installed Google APIs are integrated using Python programming, as shown in Figures 5–7. For the Cloud Vision API shown in Figure 5, the program opens the Pi camera and takes a picture; the image is then saved with the name "Book[date][time].jpg". Next, the text is extracted from the image and passed to the "translated" function. For the Translation API in Figure 6, the language of the text extracted by the Vision API is detected and the text is translated into the desired language. For the Text-to-Speech API, the translated text is converted into audio in the "synthesize_text" function, as shown in Figure 7.

Fig. 4. Flow chart for reading device

Fig. 5. Vision API for image-to-text conversion

Fig. 6. Translation API text translation

Fig. 7. Text-to-speech translation
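The sketch below approximates the capture–OCR–translate–speak integration described in Figures 5–7 using the current Google Cloud Python client libraries. It is a simplified illustration, not the authors' code: the camera capture, client construction, voice settings and the Malay ("ms") target language are assumptions; only the "translated" and "synthesize_text" function names and the "Book[date][time].jpg" file name follow the description above.

```python
# Simplified sketch of the capture -> OCR -> translate -> speak pipeline
# (assumptions: picamera for capture; google-cloud-vision, google-cloud-translate
#  and google-cloud-texttospeech client libraries; Malay "ms" as example target).
from datetime import datetime
from picamera import PiCamera
from google.cloud import vision
from google.cloud import translate_v2 as translate
from google.cloud import texttospeech

def capture_page():
    """Take a picture of the page and save it as Book[date][time].jpg."""
    filename = datetime.now().strftime("Book%Y%m%d%H%M%S.jpg")
    camera = PiCamera()
    camera.capture(filename)
    camera.close()
    return filename

def detect_text(filename):
    """Extract the text from the captured image with the Cloud Vision API."""
    client = vision.ImageAnnotatorClient()
    with open(filename, "rb") as image_file:
        image = vision.Image(content=image_file.read())
    response = client.text_detection(image=image)
    annotations = response.text_annotations
    return annotations[0].description if annotations else ""

def translated(text, target_language="ms"):
    """Detect the source language and translate into the target language."""
    client = translate.Client()
    result = client.translate(text, target_language=target_language)
    return result["translatedText"]

def synthesize_text(text, language_code="ms-MY", out_file="output.mp3"):
    """Convert the translated text into an audio file with Cloud Text-to-Speech."""
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(
            language_code=language_code,
            ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3))
    with open(out_file, "wb") as out:
        out.write(response.audio_content)
    return out_file

if __name__ == "__main__":
    page_image = capture_page()
    extracted = detect_text(page_image)
    speech_file = synthesize_text(translated(extracted))
    print("Saved audio to", speech_file)
```

On the actual device the resulting audio file would then be played through the speaker or headphones; the specific playback method is not detailed in the paper.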
5 Testing

The testing phase consists of performance testing and system usability testing. In performance testing, the developed reading device is tested with various font types and sizes to identify its capability. For the system usability test, users with visual impairment assess the device to identify the effectiveness, efficiency and satisfaction level of the developed reading device.

5.1 Performance test

For overall testing, the program script named final.py in the folder /home/pi/Desktop/ is used. It is necessary that the program reads the book correctly, so the accuracy of reading with different font types, sizes and languages is tested. The usability of the reading device is also tested with different people and real books.

• Alphabetical Font Type and Size

According to [12], there are a few recommended guidelines on the font types and sizes used in making a book. Two of the most widely used serif fonts for the body text of a book are Baskerville and Times Roman, and it is recommended not to use a font size smaller than 10 points. From [13], the recommended font size for a book is between 10 and 14 points; most adult books use a 10 to 11 point font size, while 13 to 14 points are used in children's books. It is important for a book to use the right font size as it makes reading easier and more comfortable for the sighted reader. Font type and size are tested on the reading device to find the minimum font size of each font type that the device can read aloud. The font Lucida Handwriting is also tested because it imitates human handwriting, which tests the reading device on a more complex font. The testing results are shown in Tables 2–4.

Table 2. Testing results for Times New Roman in English
Language: English, Font Type: Times New Roman
Font Size  7   8   9    10   11   12   13   14   15   16
Result     No  No  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes

Table 3. Testing results for Baskerville Old Face in English
Language: English, Font Type: Baskerville Old Face
Font Size  7   8   9   10   11   12   13   14   15   16
Result     No  No  No  Yes  Yes  Yes  Yes  Yes  Yes  Yes

Table 4. Testing results for Lucida Handwriting in English
Language: English, Font Type: Lucida Handwriting
Font Size  7   8    9    10   11   12   13   14   15   16
Result     No  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes

• Special Character

Some languages require special characters for their writing. In [14], Mandarin, Arabic and Japanese are listed as the top three hardest languages, as they use different characters and alphabets. It would be beneficial for visually impaired people to read books in any language, and it is important that these languages are translated accurately. Therefore, these languages are tested on the reading device for accuracy. As claimed in [15], a proper font size for Chinese body text is 10.5 points; based on a study in [16], the minimum readable font size in Arabic is 14 points, with 18 points being the most recommended; and as stated in [17], the recommended minimum font size for Japanese books is 12 points to avoid low legibility. It is preferable that the reading device passes the minimum font size for every type of font. The testing results are shown in Tables 5–7.

Table 5. Testing results for Mandarin to English
Language: Mandarin, Character Type: Pinyin
Font Size  7   8   9   10  11   12   13   14   15   16
Result     No  No  No  No  Yes  Yes  Yes  Yes  Yes  Yes

Table 6. Testing results for Arabic to English
Language: Arabic, Character Type: Arabic
Font Size  7   8   9   10  11  12  13   14   15   16
Result     No  No  No  No  No  No  Yes  Yes  Yes  Yes

Table 7. Testing results for Japanese to English
Language: Japanese, Character Type: Hiragana
Font Size  7   8    9    10   11   12   13   14   15   16
Result     No  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes

The performance test shows that the reading device is capable of reading a usual book down to a minimum font size of 10 points. This minimum is below the 12-point size commonly recommended by publishers, so 10-point text can be recommended to publishers for future reference. By using this tool, blind and visually impaired persons can access more ordinary books in the future, even without braille skills.
This will help them access more information and knowledge from books. The product is not limited to hardcopy books; it can also be used with digital books and websites. In addition, this product can mitigate the lack of teachers with braille skills, since teachers without braille skills can also teach blind or visually impaired students.

5.2 Usability test

The System Usability Scale (SUS) was originally created in 1986 by John Brooke [18]–[20]. It offers a simple and easy way to evaluate the usability of products and designs. Other applications that use usability testing to measure user satisfaction can be found in [21]–[23]. SUS is a practical and reliable method for assessing perceived usability, which can be used across a wide variety of digital goods and services to help programmers and developers decide whether a design solves the overall problem. According to [19], SUS is not diagnostic and is used for an overall usability assessment as described in ISO 9241-11, consisting of the following characteristics:

• Effectiveness—can users successfully achieve their objectives?
• Efficiency—how much effort and resource is expended in achieving those objectives?
• Satisfaction—was the experience satisfactory?

SUS consists of a 10-question questionnaire with five response choices, from Strongly Agree to Strongly Disagree, for each respondent. The questions are designed to collect unfiltered user feedback for each test session easily and to be answered quickly without onerous interaction. The questionnaire was created on the Google Forms platform because it provides organized real-time response information and charts, using the 10 standard SUS statements [18],[19]. According to [19], a minimum of 5 user responses is needed for reliable data. For this project, 6 respondents with visual impairment experienced the prototype and responded to the SUS questionnaire. Table 8 shows the respondents' scores.

Table 8. SUS respondents' score
Respondent  Q1  Q2  Q3  Q4  Q5  Q6  Q7  Q8  Q9  Q10
1           5   3   4   4   5   1   5   2   5   3
2           5   2   4   4   5   1   5   1   5   4
3           5   1   5   1   5   1   5   1   5   1
4           5   1   5   1   5   1   5   1   5   1
5           5   1   5   2   5   1   5   1   5   1
6           4   1   5   1   5   1   5   1   5   2

The SUS score is calculated as:

SUS Score = ((X + Y) / (R × 40)) × 100 (1)

where X = (total of odd-numbered questions) − 30, Y = 150 − (total of even-numbered questions), and R is the number of respondents.

The responses range from 1 to 5. The scores for all odd-numbered questions (Q1, Q3, Q5, Q7, Q9) of the 6 respondents are added up, and 30 is subtracted from the total to get X; because the odd-numbered questions express positive attitudes, 1 is subtracted from each odd response, and with 5 odd-numbered questions for each of the 6 respondents the overall total is reduced by 30. Then the scores for all even-numbered questions (Q2, Q4, Q6, Q8, Q10) are added up and that total is subtracted from 150 to get Y; since the even-numbered questions express negative attitudes, each even response is subtracted from 5. Following this scoring procedure, the SUS score is a score out of 100, in this case 91.7/100.
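To make the scoring concrete, the short script below recomputes the SUS score from the responses in Table 8 using Equation (1). It is an illustrative check of the tabulation, not part of the device software.

```python
# Recompute the SUS score from the Table 8 responses using Equation (1).
responses = [
    [5, 3, 4, 4, 5, 1, 5, 2, 5, 3],   # respondent 1 (Q1..Q10)
    [5, 2, 4, 4, 5, 1, 5, 1, 5, 4],   # respondent 2
    [5, 1, 5, 1, 5, 1, 5, 1, 5, 1],   # respondent 3
    [5, 1, 5, 1, 5, 1, 5, 1, 5, 1],   # respondent 4
    [5, 1, 5, 2, 5, 1, 5, 1, 5, 1],   # respondent 5
    [4, 1, 5, 1, 5, 1, 5, 1, 5, 2],   # respondent 6
]
R = len(responses)

odd_total = sum(r[i] for r in responses for i in (0, 2, 4, 6, 8))    # Q1, Q3, Q5, Q7, Q9
even_total = sum(r[i] for r in responses for i in (1, 3, 5, 7, 9))   # Q2, Q4, Q6, Q8, Q10

X = odd_total - 5 * R          # subtract 1 per odd response: 5 questions x 6 respondents = 30
Y = 25 * R - even_total        # subtract each even response from 5: 150 - total
sus = (X + Y) / (R * 40) * 100

print(f"X = {X}, Y = {Y}, SUS = {sus:.1f}")   # X = 117, Y = 103, SUS = 91.7
```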
Note that although the SUS score is reported out of 100, it is not a percentage score; according to [20], raw scores need to be normalized to produce a percentile ranking. SUS scores can also be converted into letter grades, which helps convey the outcome:

SUS Score              Letter Grade   Adjective Rating
Above 80.3             A              Excellent
Between 68 and 80.3    B              Good
68                     C              OK
Between 51 and 67      D              Poor
Below 51               F              Awful

The SUS score for the reading device is above 80.3. This shows that the reading device for the visually impaired receives an A, which is an excellent rating.

6 Conclusion

This project was developed to help visually impaired people overcome the shortage of reference books in braille. The reading device allows the visually impaired to obtain information from an accessible tool in public places such as libraries, and it is affordable for the masses. Compared with the previous product, which could only read out English, an improvement has been made so that the device detects the source language and translates it into multiple languages such as Malay, English, Tamil and Mandarin. People with knowledge of the Python language and the Raspberry Pi can build their own reading device at a reasonable price and add more functionality using the Raspberry Pi.

7 Acknowledgment

This work is funded by Universiti Teknikal Malaysia Melaka (UTeM) under grant no. PJP/1/2020/FTMK/PP/S01770.

8 References

[1] Davis, M. K., Dealwis, C., & Kuang, C. H. (2018, June). Language Policy and Language Use in Multilingual Malaysia, pp. 1003–1009.
[2] https://www.nbp.org/ic/nbp/about/aboutbraille/needforbraille.html
[3] https://www.nst.com.my/opinion/letters/2021/01/655107/more-can-be-done-visually-impaired
[4] http://www.wipo.int/pressroom/en/briefs/limitations.html
[5] Soleimani Sefat, E., Rostami, M., Shahin, A., & Movallali, G. (2016). The Needs and Problems of Students with Visual Impairment. Journal of Social Sciences and Humanity Studies, (2), 8–16.
[6] Mohammed, Z., & Omar, R. (2011). Comparison of Reading Performance between Visually Impaired and Normally Sighted Students in Malaysia. The British Journal of Visual Impairment, 196–207. https://doi.org/10.1177/0264619611415004
[7] Harum, N., & Zakaria, N. A. (2019). Smart Book Reader for Visual Impairment Person using IoT Device. International Journal of Advanced Computer Science and Applications, 10(2), 251–255. https://doi.org/10.14569/IJACSA.2019.0100233
[8] Thirasi, W., U.P, D., G.C, P., I.M.B.S.C, I., Jayakody, C., & Lokuliyana, S. (2015). Digital Talking Book for Vision Impaired Individual. International Journal of Computer Applications, 121(6), 0975–8887. https://doi.org/10.5120/21548-4569
[9] Shashua, A., & Aviram, Z. (2018). About Orcam: Meet the purpose. Retrieved March 7, 2020, from https://orcam.com/en/about/
[10] Google. (2020). Google Cloud APIs. Retrieved from https://cloud.google.com/apis/docs/overview
[11] Md Mizan, C., & Chakraborty, T. (2017). Text Recognition using Image Processing. International Journal of Advanced Research in Computer Science, 8(5), 765–768.
[12] Agarwal, A. (2008, May 1). Which Fonts Should You Use for Writing a Book. Retrieved from Digital Inspiration: https://www.labnol.org/internet/blogging/which-fonts-should-you-use-for-writing-a-book/3141/
[13] Hill, B. (2016, May 2). First Steps in Formatting for Print. Retrieved from The Editor's Blog: https://theeditorsblog.net/2016/05/02/first-steps-in-formatting-for-print/
[14] Macedo, H. (2015, March 6). Japanese, Finnish or Chinese? The 10 Hardest Languages for English Speakers to Learn. Retrieved from Understanding with Unbabel: https://unbabel.com/blog/japanese-finnish-or-chinese-the-10-hardest-languages-for-english-speakers-to-learn/
[15] Tung, B. (2014, October 28). Best Practices for Chinese Layout. Retrieved from Medium: https://medium.com/@bobtung/best-practice-in-chinese-layout-f933aff1728f
[16] Abubaker, A., & Lu, J. (2012). The Optimum Font Size and Type for Students Aged 9–12 Reading Arabic Characters on Screen: A Case Study. Journal of Physics: Conference Series, 364. https://doi.org/10.1088/1742-6596/364/1/012115
[17] Hayataki, M. (2015, December 9). The Most Comprehensive Guide to Web Typography in Japanese. Retrieved from MH Digital: https://hayataki-masaharu.jp/web-typography-in-japanese/
[18] Brooke, J. (1996). SUS—a quick and dirty usability scale.
[19] Smyk, A. (2020, March 29). The System Usability Scale & How it's Used in UX. Retrieved from Medium: https://medium.com/thinking-design/the-system-usability-scale-how-its-used-in-ux-b823045270b7
[20] usabiliTEST. (2011). What is SUS and how it can help you improve usability of your product? Retrieved from https://www.usabilitest.com/system-usability-scale
[21] Nik Azlina Nik Ahmad, & Muhammad Hussaini (2021). A Usability Testing of a Higher Education Mobile Application Among Postgraduate and Undergraduate Students. International Journal of Interactive Mobile Technologies, 15(09), 88–102. https://doi.org/10.3991/ijim.v15i09.19943
[22] Fahad Mahmoud Ghabban, Mohammed Hajjar, & Saad Alharbi. Usability Evaluation and User Acceptance of Mobile Applications for Saudi Autistic Children. International Journal of Interactive Mobile Technologies, 15(07), 30–46. https://doi.org/10.3991/ijim.v15i07.19881
[23] Subashini Annamalai, Yusrita Mohd Yusoff, & Harryizman Harun. User Acceptance of 'Let's Talk Now' Mobile App for Dysarthric Children. International Journal of Interactive Mobile Technologies, 15(06), 91–107. https://doi.org/10.3991/ijim.v15i06.20679

9 Authors

Norharyati Harum is currently a senior lecturer at the Faculty of ICT, Universiti Teknikal Malaysia Melaka (UTeM). She received her B.Eng, MSc and PhD in Engineering from Keio University, Japan. She has working experience in the R&D Next Generation Mobile Communication Department at Panasonic Japan. Her research areas include the Internet of Things (IoT), embedded systems, wireless sensor networks, and signal processing. E-mail: norharyati@utem.edu.my

Nur'Aliah Izzati Binti Md Sallehuddin Khan is a student in Computer Science (Computer Networking) at Universiti Teknikal Malaysia Melaka (UTeM).

Noraswaliza Abdullah is currently a senior lecturer in the Faculty of ICT at Universiti Teknikal Malaysia Melaka. She received her PhD from the Queensland University of Technology, Australia. Her work includes developing recommender system techniques by applying data mining techniques. Her research interests include data mining, recommender systems and database technology. E-mail: noraswaliza@utem.edu.my

Nurul Akmar Emran received a bachelor degree in Management Information Systems (MIS) from the International Islamic University Malaysia in 2001, an MSc in Internet and Database Systems from London South Bank University in 2003, and a PhD in computer science from the University of Manchester, UK, in 2011. In 2004, she joined the Department of Software Engineering, Universiti Teknikal Malaysia Melaka, as a lecturer. Her current research interests include database systems, storage space optimization, mobile analytics, and data quality. E-mail: akmar@utem.edu.my

Nurul Azma Zakaria graduated with a B.Eng from Salford University, UK, an MSc from UMIST, UK, and a PhD from Saitama University, Japan. Her research interests are system-level design, embedded systems, cyber-physical systems (CPS), the Internet of Things, IPv6 migration and 6LoWPAN. E-mail: azma@utem.edu.my

Erman Hamid is currently a senior lecturer at the Faculty of ICT, Universiti Teknikal Malaysia Melaka (UTeM). He received a BIT (Hons) from Universiti Utara Malaysia and an MIT (Computer Science) from Universiti Kebangsaan Malaysia. His research areas are the Internet of Things (IoT) and network visualization. E-mail: erman@utem.edu.my

Syarulnaziah Anawar is currently a senior lecturer at the Faculty of Information and Communication Technology, UTeM. She received her PhD in Computer Science from UiTM, Malaysia. She is a member of the Information Security, Digital Forensic, and Computer Networking (INSFORNET) research group. Her research interests include human-centered computing, participatory sensing, mobile health, usable security, and the societal impact of IoT. E-mail: syarulnaziah@utem.edu.my

Article submitted 2021-05-20. Resubmitted 2021-06-19. Final acceptance 2021-06-20. Final version published as submitted by the authors.