FACTA UNIVERSITATIS
Series: Mechanical Engineering Vol. 12, No 3, 2014, pp. 251-260

SITUATION ASSESSMENT THROUGH MULTI-MODAL SENSING OF DYNAMIC ENVIRONMENTS TO SUPPORT COGNITIVE ROBOT CONTROL

UDC 681.5

Atta Badii, Ali Khan, Rajkumar Raval, Hamid Oudi, Ricardo Ayora, Wasiq Khan, Amine Jaidi, Nagarajan Viswanathan

Intelligent Systems Research Laboratory, School of Computer Science and Electronic Engineering, University of Reading, United Kingdom

Abstract. Awareness of emerging situations in the dynamic operational environment of a robotic assistive device is an essential capability of such a cognitive system, based on its effective and efficient assessment of the prevailing situation. This allows the system to interact with the environment in a sensible, (semi-)autonomous/pro-active manner without the need for frequent interventions from a supervisor. In this paper, we report a novel generic Situation Assessment Architecture for robotic systems directly assisting humans, as developed in the CORBYS project. This paper presents the overall architecture for situation assessment and its application in proof-of-concept Demonstrators as developed and validated within the CORBYS project. These include a robotic human follower and a mobile gait rehabilitation robotic system. We present an overview of the structure and functionality of the Situation Assessment Architecture for robotic systems, with results and observations collected from initial validation on the two CORBYS Demonstrators.

Key Words: Situation Assessment, Cognitive Control, Dynamic Environments, Human-Robot Interaction, Human-Robot Co-Working and Mixed-Initiative Taking

Received October 23, 2014 / Accepted November 25, 2014
Corresponding author: Atta Badii
School of Systems Engineering, University of Reading, Whiteknights, JJ Thomson Building, RG6 6AY, UK
E-mail: atta.badii@reading.ac.uk
Original scientific paper

1. INTRODUCTION

Situation assessment is the capability of a system that provides for an awareness of emerging situations. In a robotic environment, this typically includes updates on the relevant static and dynamic states of entities (objects, persons, spaces) with which the robot may or may not be interacting at the time [1-5]. This assessment allows the robotic system to interact with the environment in a sensible manner without the need for the intervention of a (human) supervisor. An innovative generic Situation Assessment Architecture for robotic systems has been developed as part of the European CORBYS project (EC FP7) [10]. This paper presents the overall architecture for this situation assessment capability and its application in proof-of-concept Demonstrators as developed and validated in the CORBYS project. These Demonstrators include a gait rehabilitation robotic system and a robotic human follower (henceforth also referred to as Demonstrator I and Demonstrator II respectively). The robotic human follower (CORBYS Demonstrator II) is an existing mobile robot that has been modified using the CORBYS system to autonomously follow a human co-worker in exploratory settings. The mobile platform is equipped with sensors for perception of the environment, including perception of the states and behaviour of humans in the ambient space.
Thus the CORBYS cognitive modules anticipate human behaviour in the environment and create appropriate inputs to the low-level controls so as to enable the robot to follow the human co-worker whilst avoiding obstacles. The mobile gait rehabilitation robotic system (Demonstrator I) represents another very challenging application domain that has been developed as part of the CORBYS project. This consists of a mobile platform which hosts a powered robotic orthosis. The mobile platform facilitates mobility, whereas the powered robotic orthosis assists a patient with their locomotion. The Situation Assessment Architecture interprets the state and effort of the patient, including the physical and psychological state. This interpretation is used to create appropriate commands for robot control adaptation. The CORBYS architecture enables this gait rehabilitation system to optimally support the rehabilitation requirements of the patient at different stages in a range of gait disorders.

2. SITUATION ASSESSMENT ARCHITECTURE

The overall CORBYS system comprises four layers. The physical layer is at the bottom of the stack; this consists of the sensors and actuators that are plugged into the CORBYS architecture, based on the specific application domain. The logical layers include the cognitive, executive and control layers. It is the cognitive layer where the Situation Assessment Architecture resides. This architecture, the main focus of this paper, endows the robotic system with cognitive capabilities by assessing the current states of the system and the environment (including humans) to inform decision making responsive to emerging situations by generating high-level inputs for the control system.

2.1. Building situation awareness

We define situation awareness as having knowledge about a particular situation. Our architecture interprets data provided by the sensors and actuators of a robotic system to create such awareness. There are several factors a robot may need to be made aware of in an environment; we therefore follow a suitably scoped but extensible approach to the cardinality of the states to be assessed, using ontologies to formalise the definitions of context and situation in CORBYS. Our definition of context for robotic applications is centred on the location of the robot in an environment and the entities (mainly objects but also co-workers) existing in such an environment, whose dynamic states would need to be continuously monitored and assessed. A situation, on the other hand, is defined by events happening in a specific context. The context for a robot is the set of those predicates related to the spatio-temporally specific state vectors of the robot itself and the entities in its operational space relevant to the robot's interactions in the recent past, present or immediate future. This includes all self-states such as space, position, place, location and resources, and the states of persons and objects with which the robot is interacting. The notion of context is a hierarchical (multi-layered) abstraction space; its formulation and expression are strongly ontologically committed and have to be efficiently structured and delimited to provide only the necessary and sufficient situation parametric values within a spatio-temporally logical analysis window, so as to avoid computationally prohibitive complexity.
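To make the notion of a delimited, spatio-temporal context more concrete, the following is a minimal Python sketch of one possible representation; the entity types, field names and the analysis-window filter are illustrative assumptions rather than the CORBYS data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class EntityState:
    """Time-stamped state vector of one entity (the robot, a person or an object)."""
    entity_id: str
    entity_type: str                        # e.g. "robot", "co-worker", "obstacle"
    timestamp: float                        # seconds since the start of the session
    position: Tuple[float, float, float]    # (x, y, z) in the robot's frame
    attributes: Dict[str, float] = field(default_factory=dict)

@dataclass
class Context:
    """A delimited set of entity states within one temporal analysis window."""
    window_start: float
    window_end: float
    states: List[EntityState] = field(default_factory=list)

    def add(self, state: EntityState) -> None:
        # Keep only states that fall inside the analysis window, so the context
        # carries just the necessary and sufficient parametric values.
        if self.window_start <= state.timestamp <= self.window_end:
            self.states.append(state)

    def entities_of(self, entity_type: str) -> List[EntityState]:
        return [s for s in self.states if s.entity_type == entity_type]
```

Restricting the context to states inside the analysis window mirrors the delimitation argued for above: only the states needed for the current assessment are retained.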
We build a context by aggregating and interpreting the data coming from different sensors and actuators. Therefore any information sensed from an environment becomes part of the context. A situation, on the other hand, is time and event critical and is highly dependent on the context. In other words, a situation relies on the current context and is defined by the co-occurrence of particular events happening over a temporal analysis window. In CORBYS, the gait rehabilitation robot uses the CORBYS cognitive architecture to control a powered orthosis to help patients with walking difficulties to overcome some of the challenges posed by their particular gait impairments. For this case, we analyse a patient's gait to identify their specific locomotion difficulties as encountered in particular gait phases.

Fig. 1 The CORBYS Situation Assessment Architecture

The results produced from the gait pattern analysis are then correlated with physiological data to track progress and state. The second CORBYS Demonstrator is required to navigate in an environment and take deliberate actions responsive to its sensory input to detect obstacles and assert their presence to facilitate route planning and decision making. For Demonstrator I, the location context remains the same, given that the mobile robotic system with the powered orthosis will allow the patient to move about in a large hall space; however, the system and the human will not leave that space to enter a different location, e.g., an adjacent room. Thus the focus in this Demonstrator is on the representation of the context of the human who is interacting with the robot. This is based on the sensed gait motion and the psycho-physiological states. Given that each CORBYS Demonstrator is intended for a different application domain, each comes with different data requirements; we have therefore provided a flexible way to import new, domain-specific ontologies to extend the current general robot ontology. In this case, although a context will reference something different in the two Demonstrators given their distinct application domains, the logical foundation and the definitions remain re-usable throughout. The graphs which contain the built contexts and situations are then stored in a triple store which works as a long-term memory of experiences (lived experiences of situations created by the occurrence of events) and encounters (objects detected in an environment) by the robot. Saving this data in memory, stored as a time-stamped context cache, is crucial as it can be used later by the robot for offline processing or by the researcher for debugging and further research. The ontology works as a logical schema for the data that needs to be represented in a semantic graph, as well as providing a general definition in description logic to support the dynamic binding and expression of contexts and situations as required by the Demonstrators in this project. Thus in CORBYS, having the data in such a semantic graph structure supports the process of coupling the inference information provided by the perception and comprehension modules with contexts and the identified situations. We start building contexts after identifying patterns of interest to give the robot an understanding of its environment, to know which situations to expect or which ones are more likely to occur.
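As an illustration of how a built context and an attached situation could be persisted as a semantic graph in a triple store, the following sketch uses the rdflib library; the namespace URI, class names and properties are hypothetical stand-ins, not the CORBYS ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Hypothetical namespace and terms; the actual CORBYS ontology would differ.
CRB = Namespace("http://example.org/corbys#")

g = Graph()
g.bind("crb", CRB)

ctx = CRB["context_0042"]           # one context for one analysis window
obstacle = CRB["obstacle_7"]        # an entity encountered in that window
situation = CRB["situation_0042a"]  # an event of interest bound to the context

# Context: the robot's surroundings within one time-stamped analysis window.
g.add((ctx, RDF.type, CRB.Context))
g.add((ctx, CRB.windowStart, Literal("2014-10-23T10:15:00", datatype=XSD.dateTime)))
g.add((ctx, CRB.contains, obstacle))
g.add((obstacle, RDF.type, CRB.Obstacle))
g.add((obstacle, CRB.distanceToRobot, Literal(1.2, datatype=XSD.double)))

# Situation: a particular event occurring in that context.
g.add((situation, RDF.type, CRB.Situation))
g.add((situation, CRB.occursIn, ctx))
g.add((situation, CRB.eventType, Literal("obstacle_on_path")))

print(g.serialize(format="turtle"))
```

Graphs serialised in this way can be pushed to a triple store and queried later, e.g. with SPARQL, for offline processing or debugging, in the spirit of the time-stamped context cache described above.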
The context built based on the logical definition in our ontology couples situations that occur over particular temporal analysis windows of interest. Fig. 1 illustrates the final implemented version of the Situation Assessment Architecture. The perceptors and state integrators process the raw device data to arrive at the semantic knowledge upon which the context builder operates. This is used to arrive at the context and situations (that may exist at a given point in time) upon which to base the decision regarding environment impact.

2.2. Framework architecture

The modular Situation Assessment and Decision Support System provides a hybrid architecture integrating machine learning and semantic technologies to enhance the robotic perception of low-level (weak) data signals. It organises information as a taxonomy of concepts conforming to knowledge as perceived and understood by humans and, as such, remains open to integrating human-in-the-loop intervention if desired. Therefore we make use of the decision support system to make inferences based on the information available for each situation and context.

Fig. 2 The situation assessment schema for the robotic follower in the CORBYS project

The sensing modules, or perceptors, deal with processing sensory data signals. We identify patterns and information of interest using machine learning and data analysis to identify contexts. Together with the state integrators, these build contexts from data acquired from sensors and select areas of interest based on the sought-after patterns in the data. The contexts are then persisted in memory and attached to situations where particular events of interest happen. The decision engine makes decisions based on multiple selected criteria given the situation at hand. The decision takes the form of an action that can be performed, e.g., stopping to avoid an obstacle, or adapting the orthotic actuation to better support the (intended) movements of a patient in-session. Although the two CORBYS Demonstrators are different in their application, they share similarities in their data-driven modes of reasoning and the need to acquire ambient information to update the context, and thus the situation assessment, upon which to take the best action after careful analysis of the data and assessment of the situations at hand. The situation assessment schema with Demonstrator II as an example case can be seen in Fig. 2. Perception of the environment takes place based on the sensory data, followed by fusion in the comprehension phase, which allows relations between perceived objects to be built through fusion and inference to arrive at a richer context. This facilitates appropriate decision making, which leads to an action execution. This is then fed back into the loop as it impacts the environment, which is being sensed continuously.

2.3. The human co-worker's states pattern space

One of the facets of the ambient pattern space as monitored by the system to decide what actions to take (e.g., adapting the current gait actuation level) is the psycho-physiological state vector describing the person's relevant conditions. This is built using heart rate, skin temperature, perspiration, posture and activity level. The applicability and usability of this facet span both CORBYS Demonstrators.
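As a concrete illustration of such a person-state vector, the following is a minimal sketch; the field names, value ranges and the simple threshold rule are assumptions for illustration, not the CORBYS person-state model.

```python
from dataclasses import dataclass

@dataclass
class PsychoPhysiologicalState:
    """One time-stamped sample of the person-state facet described above."""
    timestamp: float         # seconds since the start of the session
    heart_rate: float        # beats per minute
    skin_temperature: float  # degrees Celsius
    perspiration: float      # normalised skin-conductance level, 0..1
    posture: str             # e.g. "upright", "leaning", "seated"
    activity_level: float    # normalised activity intensity, 0..1

def high_exertion(state: PsychoPhysiologicalState,
                  hr_limit: float = 120.0,
                  activity_limit: float = 0.8) -> bool:
    """Illustrative rule only: flag high exertion so that a downstream decision
    module could consider lowering the current gait actuation level."""
    return state.heart_rate > hr_limit or state.activity_level > activity_limit
```

A downstream decision module could subscribe to such samples and treat the flag as one input, alongside the current context, when adapting the actuation level.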
In Demonstrator II, the robotic follower benefits from an awareness of the interacting human's physiological states in scenarios such as hazardous environment exploration. In Demonstrator I, a brain-computer interface (BCI) is also employed, which processes electroencephalography (EEG) signals to detect the intention of motion and the attention to motion of the person undergoing gait therapy. This is particularly relevant in application domains requiring physical companion support responsive to (imminent) ambient conditions, as is the case in gait rehabilitation. The architecture registers the time-stamped intended motion of the patient as detected through the BCI module; such intended-motion information is used to decide on the initial level of the patient's supportive orthotic actuation.

2.4. Static and dynamic activity recognition

Acceleration and rotation from an inertial measurement unit are used to detect static and dynamic activities, including standing, walking, and turning left and right.

Fig. 3 Classification of static (standing) and dynamic (walking, turning) activities from inertial measurements (accelerometer and gyroscope): a) raw tri-axial acceleration signal (forward progression along the z-axis; all three axes are highly correlated), b) raw x-axis rotational signal from the gyroscope, c) ARATG feature extracted from the gyroscope signal – turning activities can be distinguished from other activities (turning direction: left for upward peaks, right for downward peaks), d) Expectation-Maximisation clustering results show a linearly separable data space suited to a multi-class support vector machine (without kernel)

Activity classification is done using a multi-class support vector machine with features including the Euclidean distance for walking and standing activities and the Average Rotation Angles related to Gravity direction [6] – also known as ARATG – which simplifies the identification of left and right turns (see Fig. 3). A classification accuracy of 88.73% with an average detection latency of 34 µs is achieved in 10-fold cross-validation. This facet also applies to both CORBYS Demonstrators. The robotic follower becomes aware of the acceleration and orientation of the interacting human through this processing of the inertial measurements. In the robotic gait rehabilitation system, this information allows suitable adaptations in the gait trajectories depending on the activity (for instance, starting and stopping).

2.5. Gait analysis (in CORBYS Demonstrator I)

It is the gait deviation analysis based on joint angles data that provides the information required for the gait rehabilitation system to make inferences regarding what supportive actuation needs to be applied at which point in the walk cycle to improve the person's gait, as best suited to the condition and consistent with the clinical judgement of the gait therapist.

Fig. 4 Hip joint angles (x-axis / sagittal movement) for one pathological gait cycle as extracted and segmented per phase by our gait analysis method. Muscle activity in the EMG signal is also shown for the lower-limb muscle groups: quadriceps, hamstring, tibialis and calf. The top half of the figure corresponds to the left leg while the bottom half depicts motion in the right leg.
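Before turning to how these gait phases are classified, the activity-recognition step of Section 2.4 can be illustrated with a minimal sketch: a linear multi-class SVM evaluated with 10-fold cross-validation, as reported above. The feature extraction and the synthetic data below are illustrative assumptions rather than the CORBYS implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(acc_window: np.ndarray, gyro_window: np.ndarray) -> np.ndarray:
    """One feature vector per window of tri-axial accelerometer/gyroscope samples.
    The magnitude feature helps separate standing from walking; the mean signed
    x-axis rotation is a simplified stand-in for the ARATG feature of [6]."""
    acc_magnitude = np.linalg.norm(acc_window, axis=1).mean()
    mean_rotation = gyro_window[:, 0].mean()
    return np.array([acc_magnitude, mean_rotation])

# Synthetic windows and labels: 0=standing, 1=walking, 2=turn left, 3=turn right.
# (With random data the score is near chance; real IMU features would be needed
# to approach the ~89% accuracy reported above.)
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.normal(size=(50, 3)), rng.normal(size=(50, 3)))
              for _ in range(200)])
y = rng.integers(0, 4, size=200)

# Linear multi-class SVM evaluated with 10-fold cross-validation.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=10)
print("mean accuracy: %.2f" % scores.mean())
```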
A range of machine learning and classification techniques were evaluated with respect to the accuracy of their results in classifying the phases in a gait cycle based on the joint angles data. These included ANNs, Naïve Bayes and SVMs, as well as unsupervised methods including Expectation-Maximisation and K-Means, using a joint angles dataset that comprised gait trajectories for the hip, knee and ankle joints of both legs. Trained classifiers were used to determine the optimal gait support that had to be provided to the patient as prescribed by the gait therapist based on clinical observation of the patient's pathological gait. We also undertook phase classification (i.e. the determination of the walk cycle phase transition points) based on time-domain processing as well as frequency-domain spectral analysis, using a number of statistical/time-domain features extracted from the joint-angular motion data of the gait cycle. Phase classification enabled the determination of the deviation in a patient's gait from the optimal reference gait pattern at the sub-phase level. This approach has been tested with normal and pathological data, demonstrating its suitability in classifying the phases of the gait regardless of type. Gait analysis also entails muscle activation pattern detection and tracking against the activation expected per previously reported studies [7]. The EMG signal enables the detection of the muscle activity orchestration per gait cycle, which enables the determination of the expected muscle activation pattern (via a confidence measure) as exhibited in normal locomotion. The CORBYS system provides sensors for four muscle groups, with electrodes placed on the Vastus Medialis (quadriceps), Biceps Femoris (hamstrings), Tibialis Anterior (anterior shank) and Gastrocnemius (calf). In Fig. 4, the trough (global minimum in this case) in the sagittal movement (along the x-axis) of the right hip joint represents the start of the swing phase for the right leg. At this point, the left leg enters the stance phase, providing support for the right leg swing. Conversely, leading up to this trough in the right hip x-axis signal, the right leg is in stance phase, supporting the swing of the left leg. The muscle activity analysis verifies that the muscle groups are activated consistently with the gait motion; this is of interest to the therapist from the rehabilitation point of view. Large-scale evaluation of the gait analysis and muscle activity analysis is ongoing with a cohort of user groups as part of the CORBYS project evaluation activities.

2.6. Decision making

The decision engine determines the best-suited action at each stage in each case based on reasoning over the situation assessment in the given context. The Dempster-Shafer method for evidence combination [8, 9] is used to combine information from several data sources to give a better overall picture of a system. This is done by combining beliefs given by several sensors or data points into a single set of beliefs. Traditional probabilities are replaced with belief functions that provide a measure of certainty, for instance, regarding a classification label given to a particular instance. By combining several of these beliefs from different observers or sensors, we can be much more certain of the class label assigned. Dempster-Shafer theory assigns a belief to each possible combination of states in a system. So a system with two states {a, b} will have belief values for {a}, {b}, {a, b} and {}, the empty set.
This is known as the power set of {a, b}. For a system with S unique states, there are 2^S belief assignments made by the system. If a combination of states is impossible, it is assigned a belief of 0. The empty set is also usually assigned a belief of 0. The Dempster-Shafer rule can be applied recursively without the combination order affecting the final results. Thus to combine, for instance, A, B and C, we could first apply the rule to the masses of A and B, and then combine this result with the mass of C. We implemented the Dempster-Shafer theory of evidence combination to fuse the various state data inputs from the pre-processing modules to determine a current context and situation classification at any point in the gait cycle, as is required for CORBYS Demonstrator I. However, this implementation can be used to fuse any attributes or properties.

2.7. Hardware-based reflexive behaviour adaptation

An important capability of this architecture is the hardware-based Reflexive Module (RM) implemented with Field Programmable Gate Arrays (FPGAs). This module resides on the control layer of the architecture and monitors sensory data at high speed to intervene at any point as needed to ensure safety protection. The Situation Awareness Module provides updates to the reflexive behaviour adaptation module, and thus to the FPGA reflexive layer, to enable a real-time response to emerging safety situations. In Demonstrator II, the RM realises the real-time reflexive responses of the robotic follower by implementing low-level reactive algorithms in the Core Module (e.g. obstacle detection). Laser scanner data from an artificial rectangular area in front of the robot is processed by the RM to detect obstacles in the path of the robotic follower and to ensure the safety of the robot by stopping it immediately before hitting the obstacle. This is also applicable in other real-time safety protection contexts. For gait rehabilitation support (in Demonstrator I), the RM realises the real-time reflexive responses by monitoring sensors and actuators in real time at high speed in the FPGA. It detects (emerging) unsafe situations that could potentially harm the patient or the Demonstrator. Upon detecting such a situation, the RM cuts the power supply to the gait rehabilitator. The RM carries out detection of unsafe situations using Complex Event Processing (CEP).

3. CONCLUSION

In this paper, we have described the design and implementation of the CORBYS Situation Assessment Architecture. This has included the various processing modules such as activity recognition, person-states integration, gait analysis including sub-phase classification and deviation calculation, muscle activation analysis, overall context building, and decision making regarding the assistive-remedial actuation that needs to be applied. All of the above modules have been implemented and tested individually with datasets made available over the course of the project. Beyond the integrated conformance testing and performance evaluation of the modules of the Situation Assessment Architecture, larger-scale evaluations are ongoing. This includes decision engine performance evaluation with all semantic information available from the pre-processing modules determining the next best assistive-remedial actions for gait support.
Acknowledgement: This research was supported by the European Commission as part of the CORBYS (Cognitive Control Framework for Robotic Systems) project under contract FP7 ICT-270219.

REFERENCES

1. Endsley, M., 2000, Theoretical underpinnings of situation awareness: a critical review, Situation Awareness Analysis and Measurement, Mahwah, NJ.
2. Blasch, E., Plano, S., 2002, JDL Level 5 fusion model: user refinement issues and applications in group tracking, Proc. SPIE Vol. 4729, AeroSense, pp. 270-279.
3. Wahde, M., 2009, A general-purpose method for decision-making in autonomous robots, IEA/AIE, Vol. 5579 of Lecture Notes in Computer Science.
4. Ye, J., Dobson, S., McKeever, S., 2011, Situation identification techniques in pervasive computing, Pervasive and Mobile Computing.
5. Boytsov, A., Zaslavsky, A., 2011, From sensory data to situation awareness – enhanced context spaces theory approach, 9th IEEE Int. Conf. on Dependable, Autonomic and Secure Computing.
6. Zhang, M., Sawchuk, A.A., 2011, A feature selection-based framework for human activity recognition using wearable multimodal sensors, Proc. Int. Conf. on Body Area Networks, Beijing.
7. Vaughan, C.L., Davis, B.L., O'Connor, J.C., 1992, Dynamics of Human Gait, 2nd ed., Kiboho Publishers, South Africa.
8. Shafer, G., 1976, A Mathematical Theory of Evidence, Princeton University Press.
9. Dempster, A.P., 1967, Upper and lower probabilities induced by a multivalued mapping, The Annals of Mathematical Statistics, 38(2), pp. 325-339.
10. CORBYS project, 2014, CORBYS, [online] Available at: http://www.corbys.eu (accessed 10 Nov. 2014).

SITUATION ASSESSMENT THROUGH MULTI-MODAL SENSING OF DYNAMIC ENVIRONMENTS TO SUPPORT COGNITIVE ROBOT CONTROL

Awareness of emerging situations in the dynamic operational environment of an assistive robotic system is a key property of such a cognitive system, based on its effective and efficient assessment of the prevailing situation. This enables the system to interact with its surroundings in a sensible, semi-autonomous and proactive manner without the need for frequent intervention by a supervisor. This paper presents a new generic situation assessment architecture for robotic systems that directly support humans, such as the system developed within the CORBYS project. The paper presents the overall situation assessment architecture developed within the CORBYS project as well as its application on proof-of-concept Demonstrators. The Demonstrators considered are a mobile human-following robot and a mobile gait rehabilitation robotic system. The paper gives an overview of the structure and functionality of the situation assessment architecture, together with the results obtained and the observations collected during the initial validation of the two CORBYS Demonstrators.

Key words: situation assessment, cognitive control, dynamic environments, human-robot interaction, human-robot co-working and mixed-initiative taking