Contextual Mobile Learning
A Step Further to Mastering Professional Appliances

B. T. David, R. Chalon, O. Champalle, G. Masserey, C. Yin
LIESP Laboratory, Ecole Centrale de Lyon, Lyon, France

Abstract—In this paper we describe our approach, whose objective is to apply the MOCOCO concepts to e-learning. After a short presentation of the MOCOCO (Mobility, Contextualization, Cooperation) and IMERA (Mobile Interaction in the Augmented Real Environment) principles, we discuss their use in a project called HMTD (Help Me To Do), whose aim is to use wearable computers as a framework for activities of better use, maintenance and repair of professional appliances. We successively describe the scope of M-learning, the advantages of contextualization and cooperation, and the relevant learning methods. A case study on the configuration of a wearable computer and its peripherals, taking into account context, in-situ storage, traceability and regulation of these activities, closes the paper.

Index Terms—M-learning, contextual learning, just-in-time learning, learning by doing, wearable computer, Computer Augmented Environment, cooperative activities.

I. INTRODUCTION

As announced by Weiser [1], ubiquitous computing (also known as pervasive computing) seems to be becoming concrete with the massive propagation of mobile connected devices (PDAs, TabletPCs, Smartphones, etc.) and the ever broader everyday use of informatics resources such as RFID tags [2]. Since 2001, ubiquitous computing has been an integral part of Ambient Intelligence (AmI) [3], which merges the "ubiquitous computing" and "social user interfaces" trends to adapt user interfaces to the user's environment and task context, and so to create proactivity. On the other hand, Mixed Reality [4], better known as Augmented Reality (AR), whose founding act can be situated in 1993 [5], is also in full expansion. It attempts to merge the physical and digital worlds to facilitate the user's task with special devices and particular interaction techniques (e.g. a physical block controlling a digital block).

However, the user interfaces of these new mobile connected devices and their uses [6] are similar to those of desktop computers and are often inappropriate for mobile users who must carry out several tasks simultaneously, such as talking with other persons, performing technical equipment maintenance, or visiting a tourist spot. Moreover, although these devices can be sensitive to their environment (GPS, RFID tag detection, etc.), they rarely let the user benefit from it. We must therefore adapt their behaviour transparently to the user (in a proactive way), as in an Ambient Intelligence environment. AR devices and techniques can be particularly convenient in this respect.

Our objective is to use ubiquitous computing and Mixed Reality in learning. To do so, we studied learning situations which could benefit from this approach, as well as the corresponding learning methods. We found that mastering domestic or professional appliances is an interesting domain of investigation, and we set up the HMTD (Help-Me-To-Do) project, whose objective is to apply MOCOCO learning to this domain. In the following sections we successively describe the MOCOCO concepts and the IMERA platform, and we discuss M-learning situations and learning methods. Then we present the HMTD foundations, privileging industrial situations. Finally, we describe the wearable computer configuration process and its use in learning situations.
II. MOCOCO AND ASSOCIATED CONCEPTS

Four main concepts are the fundamentals of our approach:

MOCOCO - this acronym expresses the main aspects of our approach. Its objective is to indicate that we are creating, for different actors, an environment allowing MObility, COntextualization and COoperation during task realization. A mobile actor has access to precise, contextualized data and can collaborate with several other mobile or fixed actors to solve a problem.

Proactivity - characterizing information propagation to the actors, enabled by an Ambient Intelligence environment and transparent user interface adaptation.

CAE (Computer Augmented Environment) - in the sense of Augmented Reality and ubiquitous computing.

MoUI (Mobile User Interfaces) - denoting the user interfaces of wearable computers, such as those of PDAs, Smartphones, mobile phones and other devices appropriate for mobile users working collaboratively with an elaborate contextualization (access to contextual and/or personal precise data) in a CAE.

III. PLATFORM PRESENTATION

For our studies we defined the IMERA platform (French acronym for Mobile Interaction in the Augmented Real Environment). This platform is composed of a main workplace and three auxiliary distant workspaces.

The main working area is a CAE (Computer Augmented Environment) in which different actors move about. For us, this CAE is a more or less large area covered by a WiFi network, able to receive RFID tags, either freely placed or integrated into real objects located in this space; RFID technology is our first support for the Ambient Intelligence environment. Some fixed RFID readers can also be introduced into this area. The actors (Fig. 1) move freely in this area with their wearable computers (PDAs, TabletPCs, etc.), each of them equipped with a WiFi card and an RFID reader. These wearable computers are thus connected to the network and are able to access contextual data through RFID technology and to communicate with mobile and fixed actors. The WiFi network allows actors to be connected both among themselves and to centralized systems (database servers, etc.), so they can communicate and access large amounts of data.

[Fig. 1: User with wearable computer. Fig. 2: The TableGate (camera, video projector, interactive table). Fig. 3: The Tool Tribe (tracker, pen, eraser).]
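To make this contextualization mechanism concrete, the following minimal sketch shows how a wearable client could react to an RFID tag read by fetching the matching contextual data from a central server and pushing it to the mobile UI. It is an illustration only: the server endpoint, the query format and the handler names are our assumptions, not the actual IMERA implementation.

```python
import json
import urllib.request

# Hypothetical endpoint; the real platform uses its own database servers
# reachable over the WiFi network.
CONTEXT_SERVER = "http://context-server.local/context"

def on_tag_read(tag_id: str) -> dict:
    """Called by the wearable's RFID reader driver whenever a tag is read.

    The tag identifier is resolved against the central context database,
    and the returned record is used to "contextualize" the mobile UI.
    """
    with urllib.request.urlopen(f"{CONTEXT_SERVER}?tag={tag_id}") as resp:
        context = json.load(resp)   # e.g. appliance id, location, documents
    update_mobile_ui(context)
    return context

def update_mobile_ui(context: dict) -> None:
    """Placeholder: push the contextual data to the MoUI (goggles, PDA, ...)."""
    print(f"Showing data for {context.get('appliance', 'unknown object')}")
```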
Independently of this working area, several separate distant management and observation workplaces complete the platform. For our experimentations, we have three other workplaces at our disposal in our lab.

The first workspace is intended to be the central workplace for the observation and management of collaborative activities involving coordination, i.e. for supervising the actions of the actors moving in the platform's main workplace. For this purpose, the TableGate equipment (Fig. 2) [7] is used. It is an interactive, pressure-sensitive, flat-mounted table supporting Mixed Reality thanks to a video projector and a camera. This device is able to recognize the physical objects placed and shifted on it, and can also act as a touch pad.

The second workspace, located in another room of our lab, is mainly observation oriented but can be used as a second supervision place. It is based on a Tool Tribe device (www.tool-tribe.com), an interactive whiteboard (Fig. 3) hung on the wall and completed by a video projector to display digital data. For example, the video projector can display the position of the actors on the platform in real time; a paper map of the platform can cover the panel for that purpose, but a digital map is also usable. Interactions with the panel are performed with physical pens that the system tracks. Some pens physically write, whereas others are used only as pointers, so we can select a position, an actor or other objects, in addition to physically writing and erasing drawings on the panel as on a whiteboard. The main difference between the TableGate and the Tool Tribe is that, on the TableGate, the user can manipulate physical and digital objects indifferently. The TableGate thus allows Mixed Reality tasks, covering both Augmented Reality (tasks in the real world) and Augmented Virtuality (tasks in the digital world). On the contrary, the Tool Tribe does not allow interaction with real objects; it is used in the same way as a touch screen, to manipulate only virtual digital objects.

The last workspace, located in another lab room, is devoted to the observation and evaluation of platform experimentations. It holds a trace server which acts as a UI message loop hook, filtering and storing all the UI-generated messages sent through the different networks (Ethernet, WiFi, ...), either as part of normal operation (collaborative applications) or dedicated to this purpose (single-user applications).
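As a rough illustration of this trace mechanism, the sketch below records every UI event passing through it before forwarding the event to its normal handler. The event format and the storage backend (a JSON-lines file) are assumptions made for the example, not the platform's actual protocol.

```python
import json
import time

class TraceHook:
    """Minimal sketch of a UI message loop hook: every event is recorded
    with a timestamp before being forwarded to its normal handler."""

    def __init__(self, trace_path: str):
        self.trace_file = open(trace_path, "a", encoding="utf-8")

    def __call__(self, event: dict, forward) -> None:
        record = {"t": time.time(), **event}           # stamp the event
        self.trace_file.write(json.dumps(record) + "\n")
        self.trace_file.flush()                        # survive crashes
        forward(event)                                 # normal processing

# Usage: wrap the application's dispatch function with the hook.
hook = TraceHook("ui_traces.jsonl")
hook({"user": "actor1", "widget": "map", "action": "select"},
     forward=lambda e: None)
```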
IV. PLATFORM ADAPTATION PROCESS

The IMERA platform is used in several collaborative situations (educational, industrial, cultural and sporting events). Its main working area takes place in the corresponding space, while the distant workplaces can be located anywhere, as long as a WiFi network is accessible. For each situation, it is important to identify the actors and their tasks, together with the data to be collected and manipulated. In this way we determine the technologies to deploy in the main working area and the most appropriate equipment for each actor. First, scenarios are expressed and formalized in a structured way, following the method proposed in [8], to describe all collaborative aspects as precisely as possible. Second, a synthesis leading to the Collaborative Application Behavior (CAB) model is elaborated. We are then able to extract the roles of each actor by analyzing the model from the actors' point of view, jointly with the required environment, artefacts, etc. This process helps in choosing the wearable computers and peripherals needed to carry out the tasks.

V. M-LEARNING SITUATIONS AND METHODS

Mobile learning (M-learning) is a new approach to learning that uses wireless devices in e-learning. M-learning is the result of a two-sided evolution: the development of mobile technologies, including networks and wireless devices, and the evolution of learning theory. There are many definitions of mobile learning; a significant one is the following: M-learning is any sort of learning that happens when the learner is not at a fixed, predetermined location, or learning that happens when the learner takes advantage of the learning opportunities offered by mobile technologies [9]. Without discussing the different taxonomies of M-learning in depth [10], in this paper we only separate M-learning into two categories in relation to context. Either the learning activity is totally independent of the actor's location and the context in which he is evolving, taking into account only the opportunity to use mobile device(s) to learn (in public transportation, while waiting for the bus, ...), or, at the opposite, the learning activity is related to the actor's location (physical, geographical or logical) and the context in which he is evolving. We are naturally mainly concerned with this second category of M-learning. In the same way, we can also separate the learning methods: those of the first category are independent of the mobility context, i.e. related only to the subject being learned. Naturally, the learning methods used in situated mobile learning activities are, at the opposite, related to the context. Their global characteristics are "just-in-time learning", "learning by doing" and "learning & doing", which can take various forms:

Problem-based learning: [11] defines PBL (Problem-Based Learning) as oriented towards the development of problem-solving skills, as well as towards helping learners acquire the necessary knowledge and skills. Problem-based learning assists learners in solving problems through the process of solving the kind of ill-structured problems that adults and practicing professionals are confronted with daily. Generally, PBL is an example of a collaborative, case-centered and learner-oriented method of learning. The Mobilearn project [12] studied this approach in depth and proposed several adaptations of PBL to different application domains [9].

Case-based learning: [13] uses concrete situations, examples, problems or scenarios as a starting point for learning by analogy and abstraction via reflection. A new research field relevant to case-based learning is case-based reasoning (CBR), whose aim is machine learning. The goal of CBR is to utilize the specific knowledge of previously experienced, concrete problem situations (cases) [14]. A new problem is solved by finding a similar past case and reusing it in the new problem situation (see the sketch at the end of this section). Case-based reasoning might become a new basis for the case-based learning of human beings, especially for mobile learning, because the machine first learns from the human, and the human then learns from this structured knowledge, which should be the main approach of all cognitive sciences.

Scenario-based learning (situated learning): scenario-based learning is learning that occurs in a context, situation or social framework. It is based on the concept of situated cognition, i.e. the idea that knowledge cannot be known and fully understood independently of its context. The two main principles of this kind of learning are that (1) knowledge needs to be presented in an authentic context, i.e. in settings and applications that would normally involve that knowledge; and (2) learning requires social interaction and collaboration [15].
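To illustrate the retrieve-and-reuse cycle on which CBR rests, here is a minimal sketch of nearest-case retrieval over a small case base. The feature encoding and the similarity measure are deliberately simplistic assumptions made for illustration; real CBR systems [14] use much richer case representations and include adaptation and retention steps.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A past problem situation and the solution that worked for it."""
    features: dict[str, float]   # e.g. symptom measurements
    solution: str

def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Inverse of the summed absolute feature differences (toy measure)."""
    keys = set(a) | set(b)
    distance = sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)
    return 1.0 / (1.0 + distance)

def retrieve(case_base: list[Case], problem: dict[str, float]) -> Case:
    """RETRIEVE step: find the most similar past case; the REUSE step
    would then adapt its solution to the new problem."""
    return max(case_base, key=lambda c: similarity(c.features, problem))

# Toy usage: suggest a machine repair from two past cases.
cases = [
    Case({"vibration": 0.9, "temperature": 0.2}, "replace bearing"),
    Case({"vibration": 0.1, "temperature": 0.8}, "clean cooling circuit"),
]
print(retrieve(cases, {"vibration": 0.85, "temperature": 0.3}).solution)
```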
VI. HMTD - INDUSTRIAL MAINTENANCE SCENARIO

The objective of the HMTD project is to allow users to better master domestic, public and professional appliances. The main idea is to propose to the user, in a precise situation (use, maintenance, diagnosis or repair), to learn appropriately about the appliance in order to understand its functioning principles and the commands or other actions to perform [16]. One of the industrial scenarios supported by the IMERA platform is the following (Fig. 4).

[Fig. 4: IMERA platform industrial scenario.]

An engineer in charge of the maintenance of industrial machines is called to a factory where such a machine is out of order. Once in the factory, he equips himself with see-through goggles connected to a WiFi PDA including an RFID reader. By reading the machine's RFID tags, he obtains all its features and its repair history, stored on an Internet database server, through an available WiFi access point connected to the Internet. He proceeds to a first analysis and tries to formulate a diagnosis. At any moment, he can stop his activity and choose to learn about it, i.e. to receive more complete and precise information either about functioning principles or about the actions (commands) which he is asked (guided) to execute. If he has complementary questions, or if he fails to carry out the actions alone, he can contact his supervisor or another expert (e.g. the appliance manufacturer). He can contact him by chat or by contextualized e-mail, in which the machine references are automatically included to avoid typing errors and to provide exhaustive information. They then try to produce the diagnosis together. Accurate product plans and guides are at the engineer's disposal through the Internet connection to help him recognize the different parts. He can visualize them on his see-through goggles while he is looking at the machine. Simple vocal commands enable him to browse the guides. These commands are captured by his PDA microphone and are processed either on a server, being transferred through WiFi and the Internet, or directly on the PDA, depending on the complexity of the command and the capabilities of the PDA. If the diagnosis is still not successful, he can contact a machine manufacturer expert to help him establish it. As soon as the diagnosis is established and the malfunctioning parts determined, he highlights them via his wearable computer on a plan of the machine displayed on his augmented goggles. Afterwards, the availability of the parts and the delay before the repair are computed. Later, when the parts are delivered, the repair process is described on his wearable computer, possibly with the visualization on his see-through goggles of an assembly plan or other relevant data. As soon as the machine is repaired, he updates the machine's repair history and the list of replaced parts on the server.
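The contextualized e-mail mentioned in this scenario can be sketched as follows: the message is pre-filled from the machine context acquired by the RFID read, so the engineer never retypes the references. The field names, the context keys and the mail transport are assumptions made for the example, not the project's actual format.

```python
import smtplib
from email.message import EmailMessage

def contextualized_email(machine: dict, question: str) -> EmailMessage:
    """Build a support request whose machine references are injected
    automatically from the RFID-acquired context, avoiding typing errors."""
    msg = EmailMessage()
    msg["To"] = machine["expert_contact"]          # e.g. manufacturer expert
    msg["Subject"] = f"[HMTD] {machine['model']} s/n {machine['serial']}"
    msg.set_content(
        f"Machine: {machine['model']} (serial {machine['serial']})\n"
        f"Site: {machine['site']}\n"
        f"Last repairs: {', '.join(machine['history'][-3:])}\n\n"
        f"Question: {question}\n"
    )
    return msg

# The message would then be sent over the factory's Internet uplink, e.g.:
# with smtplib.SMTP("mail.example.com") as s:
#     s.send_message(contextualized_email(machine, "Pump won't restart."))
```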
A. Choice of the wearable computer and its peripherals

For the different actors of a particular scenario, as for different scenarios, it is important to find the most appropriate wearable computers and peripherals (Fig. 5). Various solutions are possible (light and small hands-free equipment; heavier equipment with better visualization capacities or better interaction performance; ...). These choices are established after a study of all the actors' tasks, matching requirements concerning graphical information complexity (text, graphic schemas or precise blueprints, ...), interaction complexity (writing, observation, manipulation) and working conditions (seated, standing, hands availability, ...).

[Fig. 5: Goggles with integrated screen, see-through goggles, TabletPC, RFID reader and data glove.]

A precise selection process based on a selection space allows different interaction modes and system implementations to be compared, with their typical supporting devices organized along axes and classified, for each axis, by one of their most relevant characteristics [17]. This process results in different configuration proposals and helps to determine the most convenient ones. The criteria are those of the designer, e.g. minimization of the number of devices, maximization of interaction continuity (within and between tasks), and adequacy with the working conditions. The main possible choices are made along the following axes: gesture interaction of the hand, arm and/or head; vocal interaction with or without feedback; eye interaction, also called lazy interaction; writing and input capabilities via a physical or virtual keyboard or a touch screen; display capabilities, such as a screen integrated in glasses, a see-through screen in goggles, or the screen of a mobile device (PDA, TabletPC, etc.); data contextualization; localization of users and objects; and communication support, such as WiFi and Bluetooth.

Contextualization is performed by reading RFID tags. The readers are mobile or fixed, and the users wear tags in order to be identified, and/or read tags to force their mobile user interfaces to become "contextualized" (updated) with the context described in the tag content. This axis does not make the data storage explicit, which is often a database server. Localization is geographical (using GPS), logical (using RFID), or a combination of both techniques for better accuracy. Other devices and peripherals are not dismissed: these axes are the basis for the definition of our configurations, but they are not exhaustive, and a configuration defined through them can be complemented with other relevant peripherals. Besides, it is not mandatory to use every axis, since they are not all useful for every task.

B. Examples of meaningful configurations

We describe here three configurations with their purposes:
– Hands-free, highly mobile actor. Purpose: visual continuity and at least one hand free. Equipment: goggles with integrated screen, control through a data glove, voice command with vocal feedback, backpack computer.
– Hands-free Mixed Reality mobile actor. Purpose: integration of digital data into the real world, for tasks generally performed in the real world. Equipment: see-through goggles, control through a data glove, voice command with vocal feedback, backpack computer.
– Head-free mobile actor. Purpose: sizeable data support and a handheld device with interaction by pointing and writing. Equipment: TabletPC (WiFi) with RFID reader.
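A small sketch can give the flavor of such a selection process: each candidate configuration is rated along the axes above, and a task's requirements weight those ratings. The axis subset, the numeric ratings and the scoring rule are our illustrative assumptions, not the published method of [17].

```python
# Axes of the selection space (illustrative subset of those listed above).
AXES = ["gesture", "vocal", "display", "contextualization", "localization"]

CONFIGURATIONS = {
    # Ratings in [0, 1] of how well each configuration serves each axis
    # (assumed values for illustration only).
    "hands-free mobile":  {"gesture": 0.9, "vocal": 0.9, "display": 0.6,
                           "contextualization": 0.8, "localization": 0.7},
    "mixed-reality":      {"gesture": 0.9, "vocal": 0.9, "display": 0.9,
                           "contextualization": 0.8, "localization": 0.7},
    "head-free tabletpc": {"gesture": 0.3, "vocal": 0.4, "display": 0.8,
                           "contextualization": 0.9, "localization": 0.6},
}

def best_configuration(requirements: dict[str, float]) -> str:
    """Weight each axis by the task's requirement and keep the best score."""
    def score(ratings: dict[str, float]) -> float:
        return sum(requirements.get(a, 0.0) * ratings.get(a, 0.0) for a in AXES)
    return max(CONFIGURATIONS, key=lambda name: score(CONFIGURATIONS[name]))

# A maintenance task needing both hands free and in-situ visual overlays:
print(best_configuration({"gesture": 1.0, "display": 1.0, "vocal": 0.5}))
```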
VII. ARCHITECTURAL CONSIDERATIONS

Without repeating here all the architectural and process considerations, which can be found in [8], we limit our explanation to the management of learning information. Depending on the nature of the task the user is in charge of, the learning units are either oriented towards a functional understanding of the appliance, or oriented towards command, maintenance, diagnosis or repair. The connection with the real appliance is either one-way (from the appliance to the HMTD system), to collect concrete information (at least its identification, or more complete information about its main variables and parameters), or two-way, allowing the HMTD system to send commands to the appliance in order to coach its behavior. By creating a Mixed Reality environment, it is possible to establish a deep communication between the appliance and the HMTD system. These learning units are of different natures (text, graphics, simulations, historical collections of data) in relation to the kind of operation being learned (from functioning to repair). All these units are expressed in XML to allow adaptation. They also contain metadata descriptions respecting LOM. We are also studying the use of SCORM, to be able to adapt these learning units as easily as possible to the different platforms whose selection process was described previously.
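As an indication of what such a unit might look like, here is a minimal sketch that builds an XML learning unit carrying LOM-style metadata. The element names are our assumptions, since the paper does not publish the actual schema.

```python
import xml.etree.ElementTree as ET

def make_learning_unit(appliance: str, operation: str, body: str) -> bytes:
    """Assemble a learning unit as XML with a LOM-like metadata header.
    Element names are illustrative; HMTD's real schema is not shown here."""
    unit = ET.Element("learningUnit", appliance=appliance, operation=operation)
    lom = ET.SubElement(unit, "lom")                 # LOM-style metadata
    ET.SubElement(lom, "title").text = f"{operation} of {appliance}"
    ET.SubElement(lom, "interactivityType").text = "active"
    ET.SubElement(unit, "content", type="text").text = body
    return ET.tostring(unit, encoding="utf-8")

print(make_learning_unit("press-42", "diagnosis",
                         "Check the hydraulic pressure sensor first.").decode())
```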
VIII. TESTS AND EVALUATIONS

The aim of the evaluation and testing of the different configurations of the mobile devices (in the AR and ubiquitous environment) is to assess their utility, usability and acceptability. To this end, we gather several kinds of traces. Among them are the messages generated by the UI, which are sent through the network and stored on a trace server; these are either user oriented, for single-user applications, or user and group oriented (messages exchanged inside a group), for collaborative applications. The tests themselves take place in the following manner. First, the subjects' profiles are determined by asking them to fill in a pre-test questionnaire. During the experimentations, the subjects are filmed to supplement the UI logs; they are asked to verbalize their actions and difficulties, while two observers follow them and note these problems and attitudes in an observation grid. As soon as the test is finished, each subject fills in a post-test form with a set of multiple-choice questions and some open questions. Finally, crossed analyses of these different data allow the results of these evaluations to be extracted.

IX. CONCLUSIONS

We have presented an approach to contextual learning which for us is not only contextual, but also, and mainly, mobile and collaborative. In this perspective, the choice of appropriate wearable computer characteristics and interaction devices is fundamental, in relation to the nature of the activity and the context (physical, geographical or logical location, nature of the tasks to perform, and user preferences). The choice of the learning units, their learning methods and the associated learning materials is also very important. An experimentation platform for these studies of new interaction techniques and devices, with several configurations deployed for collaborative work with mobile actors in a Computer Augmented Environment, is now available at Ecole Centrale de Lyon. This platform is also an Ambient Intelligence environment through the integration of new communicating objects, fixed or mobile, active or passive; the most recent sensors and effectors are considered, including position and orientation sensors, as well as more original captors such as presence detectors. The platform supports the appraisal of concrete scenarios issued from industrial maintenance situations (on-site machine repairs, etc.), for the discovery and validation of new interaction modes and device uses. We are open to other applications to validate our approach, and other scenarios are currently being studied, mainly in the industrial field, especially with our partner Assetium and some ECL students for their end-of-year projects.

REFERENCES

[1] Weiser M., The Computer for the Twenty-First Century, Scientific American, 1991, pp. 94-104.
[2] Srivastava L., Ubiquitous Network Societies: The Case of Radio Frequency Identification, background paper of the ITU Workshop on Ubiquitous Network Societies, Geneva, http://www.itu.int/ubiquitous/, 2005.
[3] The Ambience Project, ITEA project on Ambient Intelligence, http://www.hitech-projects.com/euprojects/ambience/, 2004.
[4] Renevier P., Nigay L., Salembier P., Pasqualetti L., Systèmes mixtes mobiles et collaboratifs, Colloque sur la mobilité, LORIA, Nancy, France, 2002.
[5] Wellner P., Mackay W., Gold R., Computer Augmented Environments: Back to the Real World, special issue of Communications of the ACM, vol. 36, 1993.
[6] Plouznikoff N., Robert J.-M., Caractéristiques, enjeux et défis de l'informatique portée, Proceedings of IHM'04, 2004, pp. 125-132.
[7] Chalon R., Réalité Mixte et Travail Collaboratif : IRVO, un modèle de l'Interaction Homme-Machine, PhD thesis, Ecole Centrale de Lyon, 2004.
[8] Delotte O., David B., From Scenarios to Tasks Model for Capillary Systems, Proceedings of HCI International 2005, Las Vegas, USA, July 25-27, 2005.
[9] O'Malley C. et al., Guidelines for Learning/Teaching/Tutoring in a Mobile Environment, Mobilearn project (www.mobilearn.org), online report, 2005.
[10] Meyer C., Chalon R., David B., Caractérisation de situations de M-Learning, Proceedings of TICE 2006, France, 2006.
[11] Stepien W.J., Gallagher S., Problem-based Learning: As Authentic as it Gets, Educational Leadership, vol. 50, no. 7, 1993, pp. 25-28.
[12] Mobilearn, http://www.mobilearn.org
[13] Kolodner J.L., Owensby J.N., Guzdial M., Case-Based Learning Aids, in Jonassen D.H. (ed.), Handbook of Research for Educational Communications and Technology, 2nd ed., Mahwah, NJ: Lawrence Erlbaum Associates, 2004, pp. 829-861.
[14] Aamodt A., Plaza E., Case-based reasoning: Foundational issues, methodological variations, and system approaches, AICom - Artificial Intelligence Communications, IOS Press, vol. 7, no. 1, 1994, pp. 39-59.
[15] Lave J., Cognition in Practice: Mind, mathematics, and culture in everyday life, Cambridge University Press, 1988.
[16] David B., Masserey G., Champalle O., Chalon R., Delotte O., A wearable computer based approach for maintenance, diagnosis and repairing activities in a Computer Augmented Environment, Proceedings of EAM06: European Annual Conference on Human Decision-Making and Manual Control, Valenciennes, September 27-29, 2006.
[17] Masserey G., Champalle O., David B., Chalon R., Démarche d'aide au choix de dispositifs pour l'ordinateur porté, Proceedings of ERGO'IA 2006, Biarritz, France, 2006.

AUTHORS

B. T. David, R. Chalon, O. Champalle, G. Masserey and C. Yin are with the LIESP laboratory, Ecole Centrale de Lyon, 36 Avenue Guy de Collongue, 69134 Ecully Cedex, France (e-mail: Bertrand.David@ec-lyon.fr, Rene.Chalon@ec-lyon.fr, Olivier.Champalle@ec-lyon.fr, Guillaume.Masserey@ec-lyon.fr, Chuantao.Yin@ec-lyon.fr).

Manuscript received 14 September 2007. Published as submitted by the authors.