Context-aware Cloud-based Mobile Application for Assessment and Training of Visual Cognitive Abilities

https://doi.org/10.3991/ijim.v11i6.7438
iJIM ‒ Vol. 11, No. 6, 2017

Hanan Elazhary
King Abdulaziz University, Jeddah, Saudi Arabia; Electronics Research Institute, Cairo, Egypt
helazhary@kau.edu.sa; hananelazhary@eri.sci.eg

Abstract—Context-aware mobile applications can adapt to different mobile, user and application contexts. Mobile cloud computing has been integrated with such applications to exploit the virtually unlimited cloud resources. This paper proposes a context-aware cloud-based MObile application for assessment and training of visual Cognitive Abilities (MOCA). These abilities, such as the visualization ability of recognizing rotated objects, constitute an integral part of student intelligence. The need to ubiquitously and continuously deliver exercises relevant to a specific visual cognitive ability or skill according to the student's proficiency and context has motivated MOCA. Integrating cloud computing into MOCA allows creating an extendible repository on the cloud such that the visual material does not affect, and is not affected by, the relatively limited mobile resources. In MOCA, we propose a hierarchical data structure suitable for assessing the various cognitive abilities and skills in terms of related ones. MOCA is also a framework for building applications based on visual cognitive abilities, such as teaching visual science concepts and the visual classification and diagnosis of medical images, and possibly training and assessment systems for other types of cognitive abilities. Two prototype mobile applications have been developed based on MOCA: one for the visualization ability and one for visual classification of science concepts.
Empirical evaluation has shown the effectiveness of MOCA in training the students and the satisfaction of both students and teachers with its capabilities.

Keywords—CHC, cognitive abilities, context awareness, mobile cloud computing, visual ability, visual classification, visual-spatial skills, visualization

1 Introduction

The widespread use of smartphones has stimulated the development of various types and an extremely large number of mobile applications. An important type is context-aware mobile applications, which change their behavior according to context. Generally, context can be defined as the state of the mobile application, the mobile user or the mobile device itself [1, 2]. An example of such applications is intelligent adaptive mobile user interfaces that have been proposed for healthcare personnel [3]. Such an interface can, for example, automatically switch the sound volume to silent when in a hospital operating room. Mobile cloud computing [4] is concerned with mobile applications that benefit from cloud resources such as Big Data [5] and has been proposed for context-aware mobile applications [6]. Intelligent tutoring systems (ITSs) have been developed over the years for many domains such as database systems [7], genetics problem solving [8], mathematics [9], object-oriented analysis and design [10], and language learning [11]. These systems are intended to help tutor students in addition to, or in the absence of, human tutors. They can exhibit different forms of intelligence, such as the ability to provide problems relevant to the student's knowledge level and proficiency, the ability to identify student errors and provide relevant advice and help, and the ability to provide explanations. Mobile ITSs have been proposed in the literature to be ubiquitously available to the student [12, 13].
Context-aware mobile ITSs have been proposed [14, 15] to provide relevant material to the student according to context (including student proficiency and level of knowledge). Cloud-based context-aware mobile ITSs have also been proposed to store and deliver relevant tutoring material from the cloud [16, 17].

Cognitive abilities [18] comprise student intelligence. An important type is visual cognitive abilities, including visualization or visual-spatial skills such as the ability to match objects and recognize transformed objects. These skills have been shown to be of utmost importance for improving students' competence in Science, Technology, Engineering, and Mathematics (STEM) [19]. Improving these abilities and skills has thus been the goal of some ITSs [20-22]. A few ITSs have been developed for specific domains such as improving the visualization skills of engineers [23-25] and the diagnosis skills for medical images [26-29]. Very few ITSs have aimed at assessing different types of cognitive abilities of students in order to teach them accordingly (for example, verbally versus visually) [30, 31].

Due to the great importance of visual cognitive abilities and skills, this paper proposes an original context-aware cloud-based MObile application for assessment and training of visual Cognitive Abilities (MOCA). Being a mobile application allows MOCA to be used ubiquitously and continuously. Exploiting mobile cloud computing in MOCA suits these abilities and skills, since the corresponding exercises inherently include visual material that requires a relatively large storage space, which may exceed the limited mobile resources. This also allows creating an extendible repository on the cloud, to which new material and exercises can be added and shared to provide continuous (and possibly life-long) training of visual cognitive abilities and their applications.
This is of utmost importance since, typically, there is no clearly defined set of constraints or rules that a student can master in order to solve similar exercises. In medical diagnosis ITSs, for example, being able to diagnose a medical condition by examining some medical images does not necessarily imply the ability to correctly diagnose images of future cases. Accordingly, images of new cases would be continuously added to help improve diagnosis skills. Context-awareness allows delivering material relevant to a given visual cognitive ability, skill or corresponding application according to the student's assessed proficiency and context. In medical diagnosis ITSs, the context could be the type of medical image diagnosed. The contributions of the paper can be summarized as follows:

• Proposing MOCA, an original context-aware cloud-based mobile application for assessment and training of various visual cognitive abilities and skills.
• Exploiting mobile cloud computing to develop an extendible central repository of visual material and exercises for ubiquitous and continuous training, without affecting or being affected by the limited mobile resources.
• Incorporating context-awareness in MOCA to make it capable of delivering material relevant to a specific cognitive ability, skill or corresponding application, in addition to the student's assessed proficiency and context.
• Proposing a hierarchical data structure suitable for assessing the various cognitive abilities and skills in terms of related ones.
• Designing MOCA as a generic framework that can be tailored to numerous applications and alternative cognitive abilities, stimulating future research in this promising, almost untackled area.
• Implementing and empirically evaluating two prototype applications based on MOCA to validate its effectiveness in addition to user satisfaction.

The rest of the paper is organized as follows: Section 2 provides an overview of cognitive abilities with emphasis on visual cognitive abilities. Section 3 discusses related research in the literature. The details of MOCA are provided in Section 4. Example prototype applications of MOCA are presented in Section 5. The results of their empirical evaluation, in addition to a discussion of the qualitative features of MOCA, are provided in Section 6. Finally, Section 7 presents the conclusion of the paper and discusses possible future research.

2 Visual Cognitive Abilities

Cognitive abilities have been defined broadly as the abilities to process mental information [32]. The Cattell-Horn-Carroll (CHC) theory [18] identifies sixteen coarse-grained cognitive abilities. Some of these abilities are recursively grouped according to their functions, and each is classified into fine-grained abilities and skills. Figure 1 shows a portion of the CHC hierarchy. As shown in the figure, General Intelligence (g) is at the top of the hierarchy. Four of the sixteen coarse-grained cognitive abilities (under g) are shown in the figure grouped as Sensory Skills (S), and two others are grouped as Motor Skills (M). These six cognitive abilities are regrouped as Sensory-Motor Skills (SM). Details of the different cognitive abilities, including those shown in the figure, are beyond the scope of this paper; we refer interested readers to [18].

Visual cognitive abilities are those involving visual exercises. They include Visual Processing (Gv) and its sub-abilities and skills. Associative Memory (MA) refers to the ability to recall an item given its pair. This can apply to recalling the name of a visual object.
Complex visual cognitive abilities include Figural Fluency (FF), which is the ability to easily sketch examples or more detailed figures given a visual hint. It is worth noting that such complex visual cognitive abilities, whose assessment and training cannot be easily automated, are beyond the scope of this paper.

2.1 Visual processing

According to the CHC theory, visual processing (Gv) is an important factor of intelligence. It has been defined broadly as the set of abilities to manipulate (e.g., preserve, retrieve, transform and generate) visual images and patterns [33]. The CHC theory identifies eleven such abilities, including those shown in Figure 1:

• Visualization (Vz), or the ability to match objects even when transformed (e.g., rotation of an object in two or three dimensions).
• Visual Memory (MV), or the ability to preserve images and recognize them later.
• Imagery (IM), or the ability to mentally imagine clear images of objects and events.

Fig. 1. Portion of the CHC hierarchy.

2.2 Visualization

Visualization (or visual ability) has been shown to be of utmost importance for STEM domains; those who have exceptional visual-spatial skills are talented candidates for STEM domains even if they have poorer verbal and mathematical skills [19]. Visual-spatial teaching of science has been shown to improve the grades of fourth-grade students [34]. Visualization has been shown to be correlated with the visual-spatial intelligence of sixth-grade students in mathematics and geometry [35]. Andersen [36] emphasized the need to include tutoring of these skills (together with imagery) in gifted education due to their importance for STEM, including the development of scientific theories [37].
Many general tests of visual-spatial skills have been developed; they are widely used for assessing the visual-spatial skills of children [21] and can be used for training subjects of any age. These tests include the following:

• Stanford-Binet Pattern Analysis, which tests the ability of the student to reconstruct geometric shapes from their components.
• Stanford-Binet Paper Folding, which tests the ability of the student to recognize the shape of a folded paper after being unfolded.
• Benton Test of Block Construction, which tests the ability of the student to construct blocks matching a given image.
• Benton Judgement of Line Orientation, which tests the ability of the student to recognize whether given lines have similar angles.
• Benton Test of Form Recognition, which tests the ability of the student to match designs.
• Benton Test of Facial Recognition, which tests the ability of the student to recognize similar faces.
• Test of Mental Rotations, which tests the ability of the student to identify and recognize rotated objects.

Due to the importance of visual-spatial skills in STEM, many tests have been established in the corresponding domains, especially engineering. Classical tests [38] include:

• Mental Cutting Test (MCT), composed of 25 questions requiring the student to recognize 3D objects cut by planes.
• Differential Aptitude Test: Space Relations (DAT: SR), composed of 50 questions requiring the student to recognize folded objects.
• Mental Rotations Test (MRT), composed of 20 questions requiring the student to recognize rotated objects given their 2D views.
• The Purdue Spatial Visualization Tests: Rotations (PSVT: R), composed of 30 questions requiring the student to recognize objects rotated in space to the same extent as another one.
3 Related Work

This section presents related research in the literature. In particular, it discusses ITSs that have been proposed for teaching visual cognitive abilities, in addition to mobile ITSs.

3.1 ITSs for visual cognitive abilities

Relatively few ITSs have been proposed in the literature for tutoring visual-spatial skills. For example, a system has been developed targeting young children in the age range of six to ten [20], and another system targets children suffering from learning difficulties [21]. Improving the ability to rotate 3D objects has specifically been considered [22]. Similarly, a 3D immersive trainer has been enhanced with haptic interactions to help students improve the ability to correctly rotate 3D objects [39]. An educational game has been proposed to improve the visual-spatial abilities of players to recognize the relationship between 2D and 3D maps [40].

Some systems have been developed for domain-specific applications based on visual cognitive abilities. In the engineering domain, for example, a system has been proposed for improving visual-spatial skills in engineering and architecture by improving the visualization ability of engineers to deduce 3D shapes from 2D projections. A visual sweeper helps tutor the students by solving missing-view problems, and a visual teacher is used to critique students' partial solutions [23]. Another system has been developed for improving the ability of engineers to manipulate a robotic arm [24]. Augmented Reality (AR) has also been exploited in an educational application to help improve the visual-spatial skills of engineering students to better understand engineering graphics subjects [25]. In the medical domain, SlideTutor [28] has been proposed for improving the ability to visually classify and diagnose inflammatory skin diseases.
Other similar systems have been proposed for medical diagnosis in radiology [26, 27, 29].

3.2 Mobile ITSs

As previously noted, mobile ITSs have been proposed in the literature to be ubiquitously available to students [12, 13]. Context-aware mobile ITSs have also been proposed [14, 15] to provide relevant material to students according to context. For example, in a system developed for training nurses [14], the mobile device emulates medical equipment, and a dummy body equipped with sensors is used to determine whether the nurse conducts the necessary assessment operations at the body locations relevant to the diagnosed disease. Cloud-based context-aware mobile ITSs have also been proposed to deliver relevant material from the cloud. For example, a system has been proposed for tutoring children with special needs in communication and language skills [16]. The proposed system delivers relevant domain concepts based on analyzing the context of the mobile user, such as the type of disability, location and situation. The system also delivers relevant activities to help improve the student's skills.

Very few mobile ITSs have been concerned with visual cognitive abilities. For example, a mobile game aiming at improving the visualization of 3D objects has been constructed [41]. Developing a context-aware mobile tutoring system for visual-spatial skills has been proposed to deliver material relevant to children's proficiency [42]. Nevertheless, this is merely a proposal with no details, design or implementation.

4 Proposed System

In this section, we present the details of the proposed system, MOCA. As shown in Figure 2, MOCA is composed of a cloud-based system, a student mobile application and a teacher application. The cloud-based system is formed of three repositories and four modules, in addition to the student and teacher interfaces. The different repositories and modules are discussed in more detail in the following sub-sections.

Fig. 2. Block diagram of MOCA (icons from pixabay.com).

4.1 The questions repository

This repository stores MOCA questions. Each question has an Id, a set of contents and a correct answer. An example question is shown in Figure 3. This question aims at assessing the visualization skills of students and their ability to recognize the shape of an object after rotation in 3D. The contents of this question are the question statement, the choices and four images. Each question may also be accompanied by an explanation of the correct answer whenever applicable. It is worth noting that images can be copyright-protected using watermarking techniques [43].

4.2 The hierarchical structure repository

The cognitive abilities and skills considered in MOCA are arranged hierarchically in the hierarchical structure repository to facilitate assessing the students' proficiency in each of these abilities and skills in terms of related ones, as explained in the following sub-sections. Figure 4 shows an expansion of the CHC hierarchy portion shown in Figure 1 as a full hierarchy. Questions are added at the lowest level of the hierarchy, as shown in the example in Figure 5. In fact, indexes are added specifying the Ids of questions in the questions repository. Questions may be grouped hierarchically into types (and subtypes), as shown in the figure. Each ability or skill in the hierarchy is accompanied by an explanation, and each node is given a weight (representing its relative importance) such that the sum of the weights of the members of each group of peer nodes equals one. For example, the sum of the weights of Type1, Type2 and Type3 shown in the figure equals one.
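As an illustrative sketch only (not the paper's implementation; all names are hypothetical), such a weighted hierarchy could be represented as a simple tree whose leaves index questions, with a validity check that the weights of each group of peer nodes sum to one; unspecified weights share the remaining mass equally:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A cognitive ability, skill, question type/subtype, or question index."""
    label: str                      # e.g. "Mv", "Type1", "Q11"
    weight: Optional[float] = None  # None: share the remaining weight equally
    explanation: str = ""
    children: List["Node"] = field(default_factory=list)

def effective_weights(peers: List[Node]) -> List[float]:
    """Resolve the weights of a group of peer nodes; unspecified weights
    split the leftover weight equally among themselves."""
    given = [p.weight for p in peers if p.weight is not None]
    missing = len(peers) - len(given)
    fill = (1.0 - sum(given)) / missing if missing else 0.0
    return [p.weight if p.weight is not None else fill for p in peers]

def check(node: Node) -> None:
    """Verify that peer weights sum to one at every level of the hierarchy."""
    if node.children:
        assert abs(sum(effective_weights(node.children)) - 1.0) < 1e-9
        for child in node.children:
            check(child)

# A tree shaped like the Mv branch of Figure 5 (Type2 holding a single
# question is inferred from the worked example in Section 4.7):
mv = Node("Mv", children=[
    Node("Type1", children=[Node(f"Q1{i}") for i in range(1, 5)]),
    Node("Type2", children=[Node("Q21")]),
    Node("Type3", children=[Node(f"Q3{i}") for i in range(1, 5)]),
])
check(mv)  # raises AssertionError if any peer group's weights do not sum to one
```

A structure of this kind keeps the question Ids at the leaves as indexes into the questions repository, so the hierarchy itself stores no visual material.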
In case no weights are explicitly provided, the members of each group of peer nodes are assumed to have equal weights.

Fig. 3. An example visualization question.

4.3 The student model repository

This repository is responsible for storing log information concerning the students' performance. In other words, it stores information regarding the questions that each student has correctly or incorrectly solved. For statistical purposes, it also stores the number of trials of each question, the timestamp of submitting the answer in each trial and the length of time taken before submitting the answer.

4.4 The reporting module

The function of this module is to generate reports concerning the proficiency of the students (obtained from the evaluation module explained below) and their log information (obtained from the student model repository) when requested by their teachers. Reports may be used to trigger updates to the questions repository and/or the hierarchical structure repository. For example, they can be used to determine whether additional questions should be added, some very hard questions should be removed, or questions should be regrouped based on their difficulty levels. They may also trigger changes to the weights given to the different nodes of the hierarchical structure, or to the explanations accompanying the questions to clarify them further.

Fig. 4. Expansion of the CHC hierarchy portion shown in Figure 1.

4.5 The update module

This is the module through which changes can be made to the questions repository and the hierarchical structure repository. It has to be designed with care to take into account the interdependencies among the different repositories. Additions to the questions repository are totally independent. Modified questions are deleted and re-added as new questions.
Deleted questions also have to be deleted from the other two repositories. When a question is deleted from the hierarchical structure repository, the weights of its peers have to be adjusted accordingly. Questions deleted from the student model repository may be archived to be provided by the reporting module upon request. On the other hand, modifications to the hierarchical structure repository have no effect on the other two repositories. This is because the questions repository stores questions independently, and the student model repository merely records information regarding the questions that each student has solved correctly or incorrectly.

4.6 The teacher application

This is the application through which the teacher interacts with the teacher interface on the cloud in order to effect updates through the update module or request reports from the reporting module. There are several ways in which such an application can be realized. For example, it can be developed as a Windows application (with remote access). Alternatively, all the functionalities of this application can be provided on the cloud and accessed through any Web browser.

Fig. 5. Example hierarchical structure.

4.7 The evaluation module

The evaluation module is responsible for managing the student model repository. It updates the information in this repository whenever a question is solved by the student, upon request from the student interface. It is mainly responsible for assessing the proficiency of the student in a given cognitive ability or skill depending on the number of correctly and incorrectly solved questions and the weights of the hierarchical structure nodes. Evaluation can be initiated at any node in the hierarchical structure as needed.
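Under the equal-weights assumption used in the worked example that follows, this recursive weighted assessment can be sketched as below (a minimal illustration with hypothetical names; hierarchy nodes are plain tuples, and the Figure 5 structure is inferred from the worked example):

```python
def proficiency(node, solved):
    """Assess student proficiency at any node of the hierarchical structure.

    A node is either a question Id (a string leaf) or a (label, children)
    pair; `solved` is the set of correctly solved question Ids.  Unsolved
    questions count as incorrectly solved ones.
    """
    if isinstance(node, str):                  # leaf: a single question
        return 1.0 if node in solved else 0.0
    _label, children = node
    w = 1.0 / len(children)                    # equal peer weights assumed
    return sum(w * proficiency(child, solved) for child in children)

# The Mv branch of Figure 5 (Type2 holding one question is inferred
# from the worked example):
mv_branch = ("Mv", [
    ("Type1", ["Q11", "Q12", "Q13", "Q14"]),
    ("Type2", ["Q21"]),
    ("Type3", ["Q31", "Q32", "Q33", "Q34"]),
])
p = proficiency(mv_branch, {"Q11", "Q21", "Q31"})  # close to 0.5, as in the text
```

Because the function accepts any node, evaluation can be initiated anywhere in the hierarchy, e.g. at a single question type rather than at Mv.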
For example, referring to Figure 5, suppose that all peer nodes have equal weights and the goal is to assess the proficiency of the student in solving Mv questions. Suppose also that the student answered all the questions incorrectly except Q11, Q21, and Q31. The module starts by evaluating the proficiency of the student in each of Type1, Type2, and Type3 questions. In the case of Type1, for example, P(Q11), the proficiency of the student in this question, is 1, while the proficiency in all the other peer questions is 0. Given that w symbolizes weight, this module evaluates the proficiency of the student as follows:

P(Type1) = P(Q11)×w(Q11) + P(Q12)×w(Q12) + P(Q13)×w(Q13) + P(Q14)×w(Q14)

In other words, the proficiency of the student in Type1 is estimated to be 0.25. Similarly, P(Type2) and P(Type3) are estimated to be 1 and 0.25, respectively. Finally, the proficiency of the student in Mv is evaluated as follows:

P(Mv) = P(Type1)×w(Type1) + P(Type2)×w(Type2) + P(Type3)×w(Type3)

Accordingly, P(Mv) is estimated to be 0.25×1/3 + 1×1/3 + 0.25×1/3, which equals 0.5. It is worth noting that, in case the student has not yet solved all the relevant questions, unsolved questions are considered incorrectly solved ones.

4.8 The reasoning module

The reasoning module is responsible for providing information regarding the abilities, skills, and question types and subtypes in the hierarchical structure so that the student can select the node at which training should proceed. It provides questions relevant to a selected node or to the student's context. For example, in a medical diagnosis ITS, the student specifies the type of medical image examined and MOCA decides which question set(s) should be considered. This module can work in different modes.
In the default mode, the student is first provided with all the questions under the node of interest. Referring to Figure 5, a student being trained on Type3 questions is provided with questions Q31, Q32, Q33, and Q34. In each subsequent phase, only the incorrectly solved questions are provided to help train the student. In a slightly different mode, the student may be provided with both correctly and incorrectly solved questions until all of them are correctly solved within a single phase. In case of training at a higher-level node such as Mv, the proficiencies of the student in Type1, Type2 and Type3 are used to determine the type with the lowest proficiency, whose question set is considered first. Such information is obtained from the evaluation module.

4.9 The student mobile application

The mobile application is responsible for communicating with MOCA through the student interface. Using this application, the student is able to view the available cognitive abilities and skills and question types and subtypes, and to request an evaluation of his/her proficiency in any or all of them. Such a request is forwarded to the evaluation module. A training request, on the other hand, is forwarded to the reasoning module as explained above. The context of the student is also transmitted through this application to the reasoning module, which acts accordingly. Whenever a question is solved by the student, the details of the student's performance are forwarded through the student interface to the evaluation module to update the student model repository accordingly.

5 Prototype Applications

Many applications could be developed based on MOCA. We chose to develop a prototype to assess the visualization proficiency of students.
Towards this goal, a set of questions was prepared, each depicting an object and three candidate images of the object after transformation. The student was requested to specify which of the three images corresponded to the object. The prototype operated in phases as discussed above, but whenever an incorrectly solved question was provided in a following phase, the order of the three images was changed to reduce the effect of memorization. An example practical science application has also been developed to help students recognize a set of plant types.

Fig. 6. Definition of a portion of the hierarchical structure shown in Figure 5.

To add questions to the questions repository, the teacher application allows specifying an Id for each question, the question statement, the choices, four images and the correct answer. To define the hierarchical structure, we designed a simple language in which each node is given a label and may be given a corresponding explanation. Each node is then defined in terms of its children, starting from the root. In other words, a node may not be defined as a child of another unless the parent has already been defined. The definition of part of the hierarchical structure shown in Figure 5 is depicted in Figure 6. As shown in the figure, the members of each group of peer nodes can be given distinct weights in fraction form, provided that the sum of the weights equals one.

6 Discussion and Evaluation

In this section, we present a discussion of MOCA and the results of its empirical evaluation. In other words, we evaluate it both quantitatively and qualitatively.

6.1 Training Effect

To assess the effect of MOCA on training the students, 20 volunteer students in the age range of 15 to 21 were asked to train with MOCA for a two-hour session using the visualization application. Another group of 20 students was trained using the science application. Figure 7 shows two power curves depicting the average percentage (across all the students) of errors in each training cycle in the two applications (divided by 100). As shown in the figure, MOCA has a positive effect on training the students and reducing the number of errors in subsequent phases. It is worth noting that power curves are commonly used to evaluate ITSs and usually result when the assessed variable is the concept being learned and is represented by a set of constraints or rules [7]. In MOCA, the set of questions represents the assessment and training material.

Fig. 7. Power curves depicting the effect of MOCA on training the students in the (a) visualization and (b) science applications.

6.2 Satisfaction Questionnaire

A satisfaction questionnaire formed of ten questions was prepared, aiming at assessing one factor: the satisfaction of both students and teachers with MOCA. The respondents were asked to reply to each question on a Likert scale between 1 and 7, where 1 indicates extreme dissatisfaction and 7 indicates extreme satisfaction. The average was computed for each respondent in addition to the global average. The global average for the students was 6.2 and for the teachers was 6.5. The internal consistency and trustworthiness of the questionnaire results were estimated using Cronbach's alpha. The obtained values of α were 0.75 and 0.71, respectively, indicating high degrees of reliability of the questionnaire results.

6.3 Discussion

In this section, we present a discussion of MOCA and its advantages and limitations. It is clear that MOCA has several advantages. Since cognitive abilities and skills need continuous training until proficiency is attained, a mobile application such as MOCA is a convenient solution that can be used ubiquitously, anywhere and at any time.
In fact, there is no clear proficiency measure, and thus lifelong continuous training is favorable. As previously noted, combining cloud computing with MOCA allows creating a central repository on the cloud to which new training material can be added as it becomes available. This also allows storing the space-consuming visual material on the cloud and providing it to the mobile application as needed, saving valuable and relatively limited mobile resources. Additionally, training can proceed at any selected node in the hierarchical structure or according to the student's context.

For teachers and system administrators, MOCA facilitates adding the visual questions and defining the hierarchical structure of cognitive abilities and skills, in addition to specifying types and subtypes of the questions as needed. The assessment of the student's skills can be conveniently and automatically computed starting at any node in the hierarchical structure. Similarly, reports concerning the proficiency of the students can be generated as needed.

MOCA can be applied to visualization skills, which are especially important in STEM, and to many applications such as the visual diagnosis of medical images. It can be used as an ITS in case the goal is to train the students to solve a specific set of exercises, such as recognizing parts of the human body or similar visual science concepts. Nevertheless, MOCA is designed for cognitive abilities, skills and related applications in which exercises can be provided using visual material and answers are specific. Modifications would be needed to render MOCA suitable for other types of cognitive abilities.

7 Conclusion

This paper presented MOCA, a context-aware cloud-based mobile application for assessment and training of visual cognitive abilities.
MOCA integrates the advantages of mobile applications, context awareness and mobile cloud computing. In MOCA, we proposed a hierarchical data structure suitable for the assessment of the various cognitive abilities and skills in terms of related ones, as opposed to the classical CHC hierarchy. This is in addition to generating corresponding reports. For teachers and system administrators, it facilitates defining the hierarchical structure and the details of its questions. MOCA is original since, to the best of our knowledge, it is the first mobile application integrated with context awareness and cloud computing to provide training for a wide range of visual cognitive abilities and skills and related applications, and their assessment in terms of related ones. Two applications have been developed based on MOCA, and the results of their empirical evaluation have shown the effectiveness of MOCA in training the students and the satisfaction of the students and teachers with its capabilities.

As future work, we intend to extend MOCA to include basic and classical general-purpose professional training material and tests, and those corresponding to the engineering domain. It is intended to be used for long-term training and assessment of visual cognitive abilities for students at the university level, and the results will be reported in subsequent papers. It will also be applied in the medical domain for building a central repository for training of the visual classification and diagnosis skills of medical images. Additionally, it will be used as a starting point for developing ITSs for visual concepts in domains such as science and geography. We hope that MOCA
will trigger extensive future research in this promising, almost untackled domain and will be exploited as a framework to guide developing ITSs for other types of cognitive abilities and skills and their related applications.

8 References

[1] Dey, A. (2001). Understanding and using context. Personal and Ubiquitous Computing, 5(1):4-7. https://doi.org/10.1007/s007790170019
[2] Elazhary, H., Althubyani, A., Ahmed, L., Alharbi, B., Alzahrani, N. and Almutairi, R. (2017). Context management for supporting context-aware Android applications development. International Journal of Interactive Mobile Technologies, 11(4):186-201. https://doi.org/10.3991/ijim.v11i4.6952
[3] Elazhary, H. (2015). A cloud-based framework for context-aware intelligent mobile user interfaces in healthcare applications. Journal of Medical Imaging and Health Informatics, 5:1680-1687. https://doi.org/10.1166/jmihi.2015.1620
[4] Dinh, H., Lee, C., Niyato, D. and Wang, P. (2013). A survey of mobile cloud computing: Architecture, applications, and approaches. Wireless Communications and Mobile Computing, 13:1587-1611. https://doi.org/10.1002/wcm.1203
[5] Elazhary, H. (2014). Cloud computing for Big Data. MAGNT Research Report, 2(4):135-144.
[6] Khan, A., Othman, M., Xia, F. and Khan, A. (2015). Context-aware mobile cloud computing and its challenges. IEEE Cloud Computing, 2(3):42-49. https://doi.org/10.1109/MCC.2015.62
[7] Mitrovic, A. and Ohlsson, S. (1999). Evaluation of a constraint-based tutor for a database language. International Journal of Artificial Intelligence in Education, 10:238-256.
[8] Corbett, A., Kauffman, L., Maclaren, B., Wagner, A. and Jones, E. (2010). A cognitive tutor for genetics problem solving: Learning gains and student modeling. Journal of Educational Computing Research, 42(2):219-239.
https://doi.org/10.2190/EC.42.2.e
[9] Arnau, D., Arevalillo-Herraez, M., Puig, L. and Gonzalez-Calero, J. (2013). Fundamentals of the design and the operation of an intelligent tutoring system for the learning of the arithmetical and algebraic way of solving word problems. Computers & Education, 63:119-130. https://doi.org/10.1016/j.compedu.2012.11.020
[10] Gopalakrishnan, M., Kumar, Y. and Sangaiah, A. (2014). A method of constraint-based tutor for object-oriented analysis and design curriculum. SSRG International Journal of Computer Science and Engineering, 1(9):16-22.
[11] Elazhary, H. and Khodeir, N. (2017). A cognitive tutor of Arabic word root extraction using artificial word generation, scaffolding and self-explanation. International Journal of Emerging Technologies in Learning, 12(5):36-49. https://doi.org/10.3991/ijet.v12i05.6651
[12] Brown, Q. (2009). Mobile intelligent tutoring system: Moving intelligent tutoring systems off the desktop. Master's thesis, Drexel University, Philadelphia, PA, USA.
[13] Badaracco, M., Liu, J. and Martinez, L. (2013). A mobile app for adaptive test in intelligent tutoring system based on competences. 9th International Conference on Intelligent Environments, Athens, Greece, pp 419-430.
[14] Wu, P., Hwang, G., Su, L. and Huang, Y. (2012). A context-aware mobile learning system for supporting cognitive apprenticeships in nursing skills training. Educational Technology & Society, 15(1):223-236.
[15] Gomez, S., Zervas, P., Sampson, D. and Fabregat, R. (2014). Context-aware adaptive and personalized mobile learning delivery supported by UoLmP. Journal of King Saud University Computer and Information Sciences, 26:47-61. https://doi.org/10.1016/j.jksuci.2013.10.008
[16] Khemaja, M. and Taamallah, A. (2016).
Towards situation driven mobile tutoring system for learning languages and communication skills: Application to users with specific needs. Educational Technology & Society, 19(1):113-128.
[17] Elazhary, H. (2017). Cloud-based context-aware mobile intelligent tutoring system of technical computer skills. International Journal of Interactive Mobile Technologies, 11(4):170-185. https://doi.org/10.3991/ijim.v11i4.6852
[18] Flanagan, D. and Dixon, S. (2013). The Cattell-Horn-Carroll theory of cognitive abilities. In Encyclopedia of Special Education, Wiley Online Library.
[19] Wai, J., Lubinski, D. and Benbow, C. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101(4):817-835. https://doi.org/10.1037/a0016127
[20] Connell, M. and Stevens, D. (2002). A computer-based tutoring system for visual-spatial skills: Dynamically adapting to the user's developmental range. 2nd International Conference on Development and Learning, Cambridge, MA, USA. https://doi.org/10.1109/DEVLRN.2002.1011890
[21] Stevens, D., Cornell, M., Colvin, S., Schwarz, P., Pardi, R. and Pilgrim, B. (2003). Designing educational software to improve cognitive abilities: Pilot study results. Interactive Technologies Conference on Training, Education, and Job Performance Improvement, Arlington, Virginia, USA.
[22] Woolf, B., Romoser, M., Bergeron, D. and Fisher, D. (2003). Tutoring 3-dimensional visual skills: Dynamic adaptation to cognitive level. 11th International Conference on Artificial Intelligence in Education, Sydney, Australia.
[23] Wang, E. and Kim, Y. (2005). Intelligent visual reasoning tutor. 5th IEEE International Conference on Advanced Learning Technologies, IEEE, Kaohsiung, Taiwan. https://doi.org/10.1109/ICALT.2005.176
[24] Fournier-Viger, P., Nkambou, R. and Mayers, A. (2008). Evaluating spatial representations and skills in a simulator-based tutoring system.
IEEE Transactions on Learning Technologies, 1(1):63-74. https://doi.org/10.1109/TLT.2008.13
[25] Martin-Gutierrez, J., Contero, M. and Alcaniz, M. (2010). Evaluating the usability of an augmented reality based educational application. 10th International Conference on Intelligent Tutoring Systems, Pittsburgh, PA, USA, pp 296-306. https://doi.org/10.1007/978-3-642-13388-6_34
[26] Direne, A. (1997). Designing intelligent systems for teaching visual concepts. International Journal of Artificial Intelligence in Education, 8:44-70.
[27] Pimentel, A. and Direne, A. (1998). Cognitive measures for visual concept teaching with intelligent tutoring systems. 4th International Conference on Intelligent Tutoring Systems, San Antonio, Texas, USA. https://doi.org/10.1007/3-540-68716-5_86
[28] Crowley, R. and Medvedeva, O. (2006). An intelligent tutoring system for visual classification problem solving. Artificial Intelligence in Medicine, 36:85-117. https://doi.org/10.1016/j.artmed.2005.01.005
[29] Direne, A., Bona, L., Sunye, M., Castilho, M., Silva, F., Garcia, L. and Scott, D. (2009). Authoring adaptive tutoring systems for complex visual skills. 9th IEEE International Conference on Advanced Learning Technologies, IEEE, Riga, Latvia. https://doi.org/10.1109/ICALT.2009.166
[30] Durrani, S. and Durrani, Q. (2010). Intelligent tutoring systems and cognitive abilities. The Graduate Colloquium on Computer Sciences, Lahore, Pakistan.
[31] Lo, J., Chan, Y. and Yeh, S. (2012). Designing an adaptive web-based learning system based on students' cognitive styles identified online. Computers & Education, 58:209-222. https://doi.org/10.1016/j.compedu.2011.08.018
[32] Carroll, J. (1993). Human cognitive abilities, Cambridge University Press. https://doi.org/10.1017/CBO9780511571312
[33] Lohman, D. (1994). Spatial ability. In Sternberg, R.
(Eds.), Encyclopedia of Human Intelligence, New York: Macmillan.
[34] Brokaw, J. (2012). Picture it: Visual-spatial teaching to improve science learning. Master's thesis, Montana State University, Bozeman, Montana, USA.
[35] Yenilmez, K. and Kakmaci, O. (2015). Investigation of the relationship between the spatial visualization success and visual/spatial intelligence capabilities of sixth grade students. International Journal of Instruction, 8(1):189-204.
[36] Andersen, L. (2014). Visual-spatial ability: Important in STEM, ignored in gifted education. Roeper Review, 36(2):114-121. https://doi.org/10.1080/02783193.2014.884198
[37] Trickett, S. and Trafton, J. (2007). "What if…": The use of conceptual simulations in scientific reasoning. Cognitive Science, 31:843-875. https://doi.org/10.1080/03640210701530771
[38] Marunic, G. and Glazar, V. (2014). Improvement and assessment of spatial ability in engineering education. Engineering Review, 34(2):139-150.
[39] Berney, S., Haddad, R., Hauck, R. and Gradl, G. (2015). Improving spatial abilities with a 3D immersive environment: A pilot study. 16th Biennial EARLI Conference for Research on Learning and Instruction, Limassol, Cyprus.
[40] Nesbitt, K., Sutton, K. and Wilson, J. (2009). Improving player spatial abilities for 3D challenges. 6th Australasian Conference on Interactive Entertainment, Sydney, Australia. https://doi.org/10.1145/1746050.1746056
[41] Martin-Dorta, N., Sanchez-Berriel, I., Bravo, M., Hernandez, J., Saorin, J. and Contero, M. (2010). A 3D educational mobile game to enhance student's spatial skills. 10th IEEE International Conference on Advanced Learning Technologies, Sousse, Tunisia. https://doi.org/10.1109/ICALT.2010.9
[42] Wiederrecht, M. and Ulinski, A. (2012). Developmentally appropriate intelligent spatial tutoring for mobile devices. 11th International Conference on Intelligent Tutoring Systems, ACM, Chania, Crete, Greece. https://doi.org/10.1007/978-3-642-30950-2_81
[43] Elazhary, H.
(2011). A fast, blind, transparent, and robust image watermarking algorithm with extended Torus Automorphism permutation. International Journal of Computer Applications, 32(4).

9 Authors

Hanan Elazhary earned her B.Sc. and M.Sc. degrees from the Department of Electronics and Communications Engineering, Cairo University. She earned her Ph.D. degree in Computer Science and Engineering from the University of Connecticut, USA. Currently, she is an associate professor in the Computer Science Department, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia and the Computers and Systems Department, Electronics Research Institute, Cairo, Egypt. Her research interests include applied computing, distributed systems, software engineering and intelligent tutoring systems.

Article submitted 16 July 2017. Published as resubmitted by the author 30 August 2017.