CHEMICAL ENGINEERING TRANSACTIONS VOL. 53, 2016
A publication of The Italian Association of Chemical Engineering
Online at www.aidic.it/cet
Guest Editors: Valerio Cozzani, Eddy De Rademaeker, Davide Manca
Copyright © 2016, AIDIC Servizi S.r.l., ISBN 978-88-95608-44-0; ISSN 2283-9216
DOI: 10.3303/CET1653032

Systemising Performance Indicators in the Assessment of Complex Sociotechnical Systems

Jorgen Ernstsen*a, Salman Nazira, Bjarte K. Roeda, Davide Mancab
aTraining and Assessment Research Group (TARG), Department of Maritime Technology and Innovation, University College of Southeast Norway, Boks 2243, 3103 Tønsberg, Norway
bPSE-Lab, Process Systems Engineering Laboratory, CMIC Department, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
jorgen.ernstsen@hbv.no

Complex sociotechnical systems, e.g. in the process, nuclear power, and shipping industries, are required to perform challenging operations and comprise a combination of interdependent technical and human elements. The performance assessment of such systems calls for a systematic design for gathering and analysing the retrieved data. This involves using precise indicators to ensure that essential system elements are measured accurately. The current work reviews the last decade's use of indicators in psychology and engineering research, which are essential scientific domains of sociotechnical systems. The paper introduces a conceptual framework of indicators featuring four categories: (i) individual and abstract, (ii) individual and concrete, (iii) team and abstract, and (iv) team and concrete contents. The framework provides a way of systemising the indicators so as to offer a consistent foundation for future research towards a systematic performance assessment of sociotechnical systems, and it may increase the precision of the performance indicators that are used to assess complex sociotechnical systems.

1. Introduction

Cross-disciplinary collaboration is necessary among teams involved in complex human-machine interactions and interdependent relationships in order to succeed with many of today's difficult operations. According to Vicente (1999), a complex sociotechnical system is defined as one that scores high on the dynamic, social, and technical dimensions and features efficient interactions among these dimensions. An interaction requires that a person or object contribute to an overall process in order to achieve a goal (Petersen, 2004), where each element has implications for the social and technical interplay (Woods and Hollnagel, 2006). The dynamic nature of the market calls for improving the operational performance of complex sociotechnical systems by means of proper training and assessment methods, in order to sustain the increasing competition and solve a range of emerging challenges. Nuanced information regarding system performance is necessary to pinpoint where remedies are needed, and it supplements a mere comparison of key performance indicators, which would only measure the overall operational performance of the system (Cox et al., 2003). Complex operations are often difficult and require diverse skills and problem-solving abilities of the human operator. Unsuccessful operations in high-risk industries can have catastrophic consequences for safety, the economy, and the environment, as demonstrated by the recent Sewol disaster (Kim et al., 2016) and the BP Deepwater Horizon disaster. This emphasises the need to identify how to assess the performance of sociotechnical systems and subsequently train operators to improve system performance, so that operational processes can be carried out safely (Nazir and Manca, 2014), efficiently (Nazir et al., 2015), and in an environmentally friendly manner (Dekker et al., 2011).
A systematic understanding of the various categories of performance indicators is likely to increase the accuracy of the overall assessment through a combination of indicators. Routinely, operator assessments are conducted by Subject Matter Experts (SMEs), who are prone to subjective bias (Manca et al., 2014), and the subjective evaluations defined by the SMEs are subsequently compared with the technical performance of the system. To remedy this critical issue, operator performance indicators need to reduce subjective bias by means of more objective data gathering, whereas technical performance indicators need to be investigated at the appropriate level of analysis in relation to the operator performance indicators. The correct use of performance indicators is necessary to achieve a valid assessment of system performance. The objective of this paper is to systemise the performance indicators so that the intrinsic properties of the various indicators become evident. Subsequently, this can assist in identifying appropriate considerations related to the various uses of indicators. Operational performance may be measured by its execution (the process) or by its result (the product), and it is often necessary to determine the goal prior to an assessment. In order to increase the performance of an operation, one should measure the process. Conversely, if one wants to compare two different processes, it is often more efficient to focus the assessment on the end results, i.e. the key performance indicators. Common levels of analysis in a system are the individual, team, managerial, and organisational levels (Rasmussen and Svedung, 2000).
Performance indicators can be designed to target different levels of analysis depending on the focus of the assessment (Manca et al., 2012). Analyses at the fundamental individual and team levels have implications for performance at the higher levels of analysis. This paper focuses on performance indicators related to the operational process at the individual and team levels of analysis, in line with the overarching goal of much assessment research on complex sociotechnical systems, so as to increase the performance of such systems through a systematisation of performance indicators. While acknowledging the need for a holistic approach to system assessment (Vicente, 1999; Hollan et al., 2000; Dekker, 2002), the focus of this article is to systemise the performance indicators with the purpose of contributing a step towards the development of an objective and holistic approach to systems assessment.

2. Systemising performance indicators

Conceptually, any given system is a collection of multiple factors and their relationships (Hall and Fagen, 1968). A performance indicator measures the level of performance of the factor that is investigated within the system. Figure 1 presents an overview of the relationships among indicators, factors, and the system.

Figure 1: Relationship among indicators, factors affecting performance in a system, and system performance. I = individual, T = team.

Factors within the system may be either concrete, like the temperature indicator of a liquid, or abstract, like communication among field process operators (Gould et al., 2006). Accordingly, each performance indicator, which is required to assess the various factors, must reflect whether the factor being measured is concrete or abstract. Within a given system, various factors need to be measured simultaneously to contribute meaningful information about the system.
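As an illustration of the relationship in Figure 1, the sketch below (Python) combines indicator readings into factor scores and factor scores into a single system-level score. All factor names, readings, and weights are hypothetical values invented for the example, not measures taken from this study.

```python
# Sketch of the Figure 1 relationship: indicators measure factors, and
# factor scores combine into an overall system performance score.
# All names, readings, and weights below are hypothetical illustrations.

def factor_score(indicator_readings):
    """Average the (already normalised, 0-1) readings of a factor's indicators."""
    return sum(indicator_readings) / len(indicator_readings)

def system_performance(factors, weights):
    """Weighted combination of factor scores into one system-level score."""
    return sum(weights[name] * factor_score(readings)
               for name, readings in factors.items())

factors = {
    "communication (T, abstract)": [0.8, 0.7],   # e.g. rated relevance of exchanges
    "valve handling (I, concrete)": [0.9, 1.0],  # e.g. correct open/close actions
}
weights = {"communication (T, abstract)": 0.5,
           "valve handling (I, concrete)": 0.5}

print(round(system_performance(factors, weights), 3))  # prints 0.85
```

The weighting step is where the systematisation matters: which factors enter the sum, and with what weights, depends on measuring the right mix of abstract and concrete factors at the same time.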
An appropriate method of measuring the performance indicators requires a holistic approach, emphasising the need to measure multiple factors at the same time so as to achieve a valid assessment of the system (Ottino, 2003). A complex system may consist of factors with various measurement difficulties, which need to be addressed. A sociotechnical system may consist of factors that are prone to both subjective and social-facilitation biases during measurement, which are commonly related to the abstract and social components within the given system. It is often difficult to achieve a true measurement of such variables, and they require operationalisation in order to be measurable. Operationalisation is the process of defining an abstract phenomenon using tangible factors, e.g. measuring palm sweat as an indicator of stress. On the other hand, indicators that provide performance data on concrete technical components are more easily accessible to the assessor and often yield less measurement error in the analysis. The current literature on performance indicators revealed similarities that fit within the presented framework, as shown in Figure 2. Individual performance indicators that require operationalisation, i.e. that require empirical indicators to gather data (category 1 in Figure 2), mostly measure abstract factors and commonly regard the human agents within a system. An example is mental workload, which may be defined as "the portion of operator information processing capacity or resources that is actually required to meet system demands" (Eggemeier et al., 1991, p. 207), commonly measured subjectively or through physiological empirical indicators of mental workload. Team performance indicators that require operationalisation (category 2 in Figure 2) involve measuring latent constructs that explain team performance. Situationally correct or relevant communication among team members (e.g.
among field and control room operators of a process plant) is an example of a team performance indicator (Øvergård et al., 2015). Routinely, such indicators are measured through subjective evaluations made by SMEs after an experiment or operation. An empirical indicator of team performance may be the frequency of relevant conversations.

Figure 2: Four categories of performance indicators.

Individual performance indicators that are easy to access in the system (category 3 in Figure 2) are often related to concrete operator performance measures. Examples are the opening/closing of valves, manual sampling, testing the viscosity of liquids, and the general supervision of various parameters (e.g. temperatures, pressures, concentrations). Such objective indicators may give valuable information regarding an operator's task performance. Team performance indicators that do not require operationalisation (category 4 in Figure 2) are objective measures of a system variable that results from a team effort. The team performance contribution is more than the mere aggregation of the individual performance scores within the team. Examples of objective team-performance indicators are: resource consumption, e.g. optimal use of a catalyst; correct pressure adjustment; adding the correct amount of bentonite or brine to a mixture; and general downtime in various operations, e.g. the time to start up a chemical process. All the mentioned examples require team collaboration in order to achieve an efficient operation. Objective performance indicators already provide the analyst with unambiguous information about a system state, e.g. the temperature in a gas-separator unit, and thus do not need the operationalisation that empirical indicators require to connect to a higher-order abstract factor (see Figure 1). Epistemologically, the paper disregards the notion that perception in itself is dependent on the eye of the beholder, i.e.
that the human agent arguably influences the true objectivity of an objective indicator. However, such indicators often generate performance data regarding the technical system automatically, as opposed to abstract constructs that require empirical indicators. Objective indicators are technically oriented and thus operate independently in providing the assessor with performance data; hence, they are not influenced by subjective and social-facilitation biases. This is in contrast to abstract constructs, which require additional considerations in order to be measured. To access performance data, it is necessary to operationalise the performance indicator and attach empirical indicators to the latent construct in order to explain the variance of intangible constructs. Empirical indicators can generally be considered tools for measuring concrete data that are thought to reflect the abstract factor, e.g. using palm sweat to determine an operator's mental workload (Jacobs et al., 1994). The current investigation of performance indicators involved the systemisation of the scientific literature from both the psychology and engineering domains. The investigation was primarily focused on the maritime sector; however, knowledge about performance indicators that are transferable across the aviation, railroad, road transport, offshore, nuclear, and chemical industries was also considered in the analysis. By identifying the various performance indicators that are used in the engineering and psychology domains, four different categories of performance indicators at the individual and team levels of system analysis (Rasmussen and Svedung, 2000) emerge.
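The two dimensions underlying these categories, level of analysis (individual/team) and content (abstract/concrete), can be encoded directly. The sketch below (Python) is a hypothetical illustration: the class and attribute names are invented, the example indicators are taken from the text, and the category numbering follows Figure 2.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the four categories in Figure 2:
# level (individual/team) x content (abstract/concrete).

class Level(Enum):
    INDIVIDUAL = "individual"
    TEAM = "team"

class Content(Enum):
    ABSTRACT = "abstract"    # requires operationalisation via empirical indicators
    CONCRETE = "concrete"    # directly accessible, objective

@dataclass
class PerformanceIndicator:
    name: str
    level: Level
    content: Content

    @property
    def category(self):
        """Map (level, content) to categories 1-4 as used in Figure 2."""
        table = {
            (Level.INDIVIDUAL, Content.ABSTRACT): 1,  # e.g. mental workload
            (Level.TEAM, Content.ABSTRACT): 2,        # e.g. communication relevance
            (Level.INDIVIDUAL, Content.CONCRETE): 3,  # e.g. valve operations
            (Level.TEAM, Content.CONCRETE): 4,        # e.g. start-up downtime
        }
        return table[(self.level, self.content)]

    @property
    def needs_operationalisation(self):
        """Abstract factors need empirical indicators; concrete ones do not."""
        return self.content is Content.ABSTRACT

workload = PerformanceIndicator("mental workload", Level.INDIVIDUAL, Content.ABSTRACT)
downtime = PerformanceIndicator("start-up downtime", Level.TEAM, Content.CONCRETE)
print(workload.category, downtime.category)  # prints: 1 4
```

Making the category an explicit property of each indicator is one way to let an assessment design query, for instance, which indicators will need empirical proxies before data collection starts.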
The conceptual framework represents the four categories of performance indicators identified in the review: team performance indicators that require empirical indicators, team performance indicators that do not require empirical indicators, individual performance indicators that require empirical indicators, and individual performance indicators that do not require empirical indicators, as shown in Figure 2.

3. Theoretical and practical benefits of the proposed framework

The competitive nature of the dynamic market in high-risk industries, e.g. process, nuclear power, and shipping, requires enhanced operational performance. Key performance indicators regarding the overall economic, safety, and environmental performance of a system allow accounting for the individual contributions to the end product (Banda et al., 2016). However, strategy-oriented key performance indicators provide insufficient information to conclude which aspects of the system work well and which need improvement. Nuanced information regarding the system process performance will pinpoint the need for improvement. To achieve this, a set of performance indicators needs to be designed so as to be complementary to the fundamental levels of complex operations. To achieve nuanced information about a system, the various levels of complex sociotechnical systems need to be assessed. An important level of analysis is the team in complex operations. For complex operations to be successful, a system should be designed around effective and efficient teamwork. It is, however, difficult to assess teamwork, as the set of parameters that needs to be included exceeds the requirements associated with an individual operator assessment. It is also important to capture the positive contribution of the synergistic effect and the negative contribution of the coordination loss that exist when working in teams. Synergy and coordination loss are inevitable properties of teamwork.
This is reflected in the systematisation of the performance indicators within categories 2 and 4 of the framework in Figure 2, where team performance indicators, whether abstract (category 2) or concrete (category 4), will encompass aspects of synergy and coordination loss. This systematisation makes the intrinsic properties of team performance indicators evident, so that the assessment design may arrive at an optimal combination of performance indicators with less effort. The individual level of measurement is also important in the assessment of complex sociotechnical systems. Effects generated by human operators also need to be measured in order to account for individual contributions to system performance. Human performance varies within and across individuals, and it is often difficult to measure, as the variation is often abstract and the same performance effect may already have been accounted for by another indicator within the system, e.g. by measuring both the stress level of operators and the overall team process performance. Indeed, stress and overall team performance may be expected to correlate. Disregarding a proper systematisation of indicators may result in over-weighing some aspects of the system, by merely counting the same performance contribution twice. In the framework, categories 1 and 3 (in Figure 2) concern individual performance indicators in complex operations. Performance indicators within these categories focus on abstract (category 1) and concrete (category 3) aspects of the individual operator, respectively. Again, the systematisation of the indicators in the proposed framework makes the indicators easy to identify and use when developing assessment programmes. Individual performance indicators need to exclude the performance contributions generated by team synergy and coordination loss (Ingham et al., 1974; Kozlowski and Ilgen, 2006) (e.g.
the time to restart a chemical process), as these properties are already counted within various team performance indicators. An efficient interaction between humans and machines is also necessary to achieve successful operations. Proper handling of a system's technical components requires skill. As the performance of an operation depends on the level of skill within the workforce, attention must be given to the system's overall skill level. Indicators that measure the interaction between humans and machines need to be accounted for within the team performance indicators. Knowledge about the indicators may increase the precision when assessing teams and individuals in complex sociotechnical systems. The depicted framework (Figure 2) systemises the performance indicators that are used in the assessment of these systems into four categories. Attention should be paid to the use of performance indicators in assessing operational performance in complex sociotechnical systems. By acknowledging the various intrinsic properties of the different performance indicators, a more valid comparison of the performance data may be conducted to reveal meaningful information about the system performance. A framework that systemises the indicators may also make evident the strengths and weaknesses of the performance indicators in the different categories, as the indicators within each category are expected to be similar in nature and thus prone to the same strengths and weaknesses. This is surmised to increase the awareness of each group of performance indicators at the individual and team levels of analysis. The necessary considerations revolve around the properties of the performance indicator. Social and abstract indicators are commonly measured using subjective performance feedback from the operators, which is liable to social biases, such as the desire to rate one's own performance higher than the actual performance.
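As discussed above, correlated indicators (e.g. operator stress, operationalised via palm sweat, and overall team process performance) risk counting the same performance contribution twice. One simple screening step is to check pairwise correlations between indicator series before combining them. The sketch below (Python) does this with purely hypothetical data; the function names and the 0.8 threshold are illustrative choices, not values proposed by this paper.

```python
from math import sqrt

# Sketch of a double-counting check: if two indicator series correlate
# strongly, combining both may weigh the same contribution twice.
# The data and the threshold below are hypothetical illustrations.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_double_counting(a, b, threshold=0.8):
    """Flag indicator pairs whose absolute correlation exceeds the threshold."""
    return abs(pearson(a, b)) > threshold

stress = [0.2, 0.4, 0.6, 0.8]        # operationalised, e.g. via palm sweat
team_process = [0.9, 0.7, 0.5, 0.3]  # inversely related in this invented example

print(flag_double_counting(stress, team_process))  # prints True
```

A flagged pair does not prove redundancy, but it signals that the assessment design should examine whether the two indicators capture the same underlying factor before both are weighted into the overall score.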
At the same time, technical performance measures depend on the experience of the human agent/trainer/manager who reads and interprets the presented information. A framework that makes the respective properties of the performance indicators more evident may aid the design of assessments of complex sociotechnical systems. The conceptual framework presents a way to systemise performance indicators in order to increase the correct use of individual and team performance indicators. The framework may also provide a foundation for further theoretical development concerning the use of performance indicators in the assessment of complex sociotechnical systems. The components within the framework are considered in relation to various system performance indicators, such as efficiency, safety, and environmental performance indicators, and may yield practical implications for the use of system performance measures. In addition, the presented framework may assist in designing alarm systems in complex operations, e.g. abstract alarms could contribute a higher prioritisation score than concrete alarms, as they require more operator assessment in order to ensure proper alarm handling. The framework operates within the boundary of a holistic approach to assessing complex sociotechnical systems. Even though the performance measures are identified and systemised into various categories, the design of the overall performance assessment of the system needs careful attention, whereby all levels of the system are considered: the individual, team, organisational, and system measures. The conceptual framework provides a systematisation of performance indicators at the individual and team levels of analysis. Further understanding of performance indicators at the organisational and system levels needs to be included in an overarching model of performance indicators for complex sociotechnical systems.

4. Conclusion

This paper presented a way to systemise performance indicators aimed at achieving a valid assessment of complex sociotechnical systems. A conceptual framework (Figure 2) was introduced to systemise performance indicators at the individual and team levels of analysis, contributing a step forward towards overall performance improvement in complex operations. This work paves the way towards a systematic development of individual and team performance indicators and towards research on subjective and objective assessments of complex sociotechnical systems.

References

Banda, O. A., Hanninnen, M., Lappalainen, J., Kujala, P., & Goerlandt, F. (2016). A method for extracting key performance indicators from maritime safety management norms. Journal of Maritime Affairs, 1-29. doi: 10.1007/s13437-015-0095-z
Cox, R. F., Issa, R. R. A., & Ahrens, D. (2003). Management's perception of key performance indicators for construction. Journal of Construction Engineering and Management, 129(2), 142.
Dekker, R., Bloemhof, J., & Mallidis, I. (2011). Operations research for green logistics – an overview of aspects, issues, contributions and challenges. European Journal of Operational Research, 219, 671-679. doi: 10.1016/j.ejor.2011.11.010
Dekker, S. W. A. (2002). The field guide to human error investigations. Ashgate.
Eggemeier, F. T., Wilson, G. F., et al. (1991). Workload assessment in multi-task environments. In D. L. Damos (Ed.), Multiple task performance (pp. 207-216). London: Taylor & Francis.
Gould, K. S., Røed, B. K., Koefoed, V. F., Bridger, R. S., & Moen, B. E. (2006). Performance shaping factors associated with navigation accidents in the Royal Norwegian Navy. Military Psychology, 18, 111-129.
Hall, A. D., & Fagen, R. E. (1968). Definition of system. Organizations, 1, 31-43.
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174-196.
Ingham, A.
G., Levinger, G., Graves, J., & Peckham, V. (1974). The Ringelmann effect: Studies of group size and group performance. Journal of Experimental Social Psychology, 10, 371-384.
Jacobs, S. C., Friedman, R., Parker, J. D., Tofler, G. H., Jimenez, A. H., Muller, J. E., Benson, H., et al. (1994). Use of skin conductance changes during mental stress testing as an index of autonomic arousal in cardiovascular research. American Heart Journal, 128(6), 1171-1177.
Kim, T., Nazir, S., & Øvergård, K. I. (2016). A STAMP-based causal analysis of the Korean Sewol ferry accident. Safety Science, 83, 93-101.
Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7(3), 77-124.
Manca, D., Nazir, S., & Colombo, S. (2012). Performance indicators for training assessment of control-room operators. Chemical Engineering Transactions, 26.
Nazir, S., & Manca, D. (2014). How a plant simulator can improve industrial safety. Process Safety Progress. doi: 10.1002/prs.11714
Nazir, S., Sorensen, L. J., Øvergård, K. I., & Manca, D. (2015). Impact of training methods on distributed situation awareness of industrial operators. Safety Science, 73, 136-145. doi: 10.1016/j.ssci.2014.11.015
Ottino, J. M. (2003). Complex systems. AIChE Journal, 49, 292-299. doi: 10.1002/aic.690490202
Øvergård, K. I., Nielsen, A. R., Nazir, S., & Sorensen, L. J. (2015). Assessing navigational teamwork through the situational correctness and relevance of communication. Procedia Manufacturing, 3, 2589-2596. doi: 10.1016/j.promfg.2015.07.579
Petersen, J. (2004). Control situations in supervisory control. Cognition, Technology and Work, 6, 266-274.
Rasmussen, J., & Svedung, I. (2000). Proactive risk management in a dynamic society. Karlstad, Sweden: Swedish Rescue Services Agency.
Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive, and healthy computer-based work. Mahwah, NJ: Lawrence Erlbaum Associates.
Woods, D. D., & Hollnagel, E. (2006).
Joint cognitive systems: Patterns in cognitive systems engineering. New York: Taylor & Francis.