DOI: 10.3303/CET2290115
Paper Received: 16 November 2021; Revised: 9 March 2022; Accepted: 20 April 2022
Please cite this article as: Wood M.H., Koutelos K., Hailwood M., Cowley C., 2022, Learning lessons from chemical incidents – What’s stopping us and how we can make it happen, Chemical Engineering Transactions, 90, 685-690. DOI: 10.3303/CET2290115

CHEMICAL ENGINEERING TRANSACTIONS VOL. 90, 2022
A publication of the Italian Association of Chemical Engineering
Online at www.cetjournal.it
Guest Editors: Aleš Bernatík, Bruno Fabiano
Copyright © 2022, AIDIC Servizi S.r.l.
ISBN 978-88-95608-88-4; ISSN 2283-9216

Learning Lessons from Chemical Incidents – What’s Stopping Us and How We Can Make it Happen

Maureen Heraty Wood*a, Konstantinos Koutelosb, Mark Hailwoodc, Charles Cowleyd
a European Commission Joint Research Centre, Major Accident Hazards Bureau
b Independent consultant
c LUBW State Institute for Environment Baden-Württemberg
d Cranfield University School of Management
Maureen.wood@ec.europa.eu

While the value of lessons learning is proclaimed far and wide by industry experts, recent accidents in OECD countries call into question the degree to which high hazard industries are using accident information effectively. Lessons learning is a central part of chemical accident risk management because it confronts the reality that individuals, and at a larger scale, organisations, can be blind to the potential for failure in a system. The safety management system (SMS) and the risk management processes, which encompass hazard identification, risk assessment and risk treatment, are the expression of conscious efforts to deal with these vulnerabilities. Insufficient identification of hazards in process design, and underestimation of the risk associated with even the smallest deviations from established standards and procedural norms, may have serious and sometimes even fatal impacts.
So it is crucial that lessons learned from incidents provide input into the risk analysis process. It is equally important that those involved know how to identify and apply the relevant lessons from the resources available, and do so. There is plenty of evidence from recent accidents and studies that lessons available from incidents were not used effectively. While there is an ample supply of chemical accident information within large corporations as well as in the public domain, the accessibility and exploitation of these resources has not necessarily grown. The authors argue that one explanation for failing to learn from past lessons stems from a collective failure of all stakeholders to invest in lessons learning beyond reporting chemical accident investigation findings. The authors further argue that a major reason for this is that the traditional ‘command and control’ form of leadership, prevalent in industry, inhibits organisational learning by taking inadequate account of the operational context and failing to achieve an effective balance between control and adaptation. Recent empirical studies underline the importance of this balance of administrative and adaptive practices for organisational learning to be effective, so that lessons from incidents are embedded into operational reality. The authors propose how such a learning culture can be achieved by employing specific adaptive and enabling leadership practices.

1. Introduction: the problem

There is widespread agreement in the hazardous industries that lessons learned are critical to reducing accident risk. The idea is far older than chemical process safety. Biblical reference to history repeating itself appears as early as Ecclesiastes 1:9: ‘There is nothing new under the sun’, and it has been taken for granted since the beginning of the industrial age that we need to learn from technological disasters to prevent similar tragedies.
In the last century, several countries formalised their commitment to this principle through various legal frameworks, particularly by establishing investigation boards for major industrial disasters. In the wake of the 1984 Bhopal chemical disaster, the chemical industry was forced to reflect seriously on whether its current strategies were sufficient to prevent major chemical disasters and gain public trust. Trevor Kletz, a visionary proponent of chemical process safety well before Bhopal, strongly promoted lessons learning as a way forward for the industry. The very first line of his book Lessons from Disaster: How Organisations Have No Memory and Accidents Occur is ‘It might seem to an outsider that industrial accidents occur because we do not know how to prevent them. In fact, they occur because we do not use the knowledge that is available.’ (Kletz, 1993) He then describes a vision of incidents providing essential information on weaknesses threatening system safety, and how organisations must find ways to keep the learnings alive in the organisation. Sadly, there is much evidence that this vision has not yet been achieved. In this paper the authors provide a summary of this evidence, of the obstacles inhibiting learning, and finally, practical recommendations for how these obstacles can be overcome to build a transformative learning culture.

2. Trying to learn lessons that make a difference

In organisational learning theory, all learning starts with the acquisition of information. In the context of chemical risk management, collecting information from incidents is the critical first step, followed by processing and storing the information. Processing, in this context analysing incidents, commonly addresses only the specific situation or failure in question (‘the pipe broke’), resulting in so-called ‘single-loop’ learning (‘fix the pipe’).
However, more valuable ‘double-loop’ learning may be achieved (Argyris & Schön, 1978, 1996) to address the underlying assumptions, policies and values that may have allowed, or even encouraged, the failure to occur (‘cut maintenance costs’). Clearly, in the interests of reducing risk, double-loop learning would lead an organisation to review how its management and philosophy should be changed to prevent future failures. In recent decades, efforts have been made by industry and government to make these concepts operable for the prevention of industrial accidents, but so far with limited success. The theory of incident causation has developed in distinct phases, from the ‘domino’ accident model (Heinrich, 1936) and the ‘cause-and-effect’ models of ‘Failure Modes and Effects Analysis’ (US Dept of Defense, 1949) and ‘fault tree analysis’ (Watson, 1961), through the ‘behavioural safety’ approach (Krause, 1990), criticised for its potential for blaming workers and its ‘fallacy of mono-causality’ (Hopkins, 2006). All of these are single-loop learning approaches. Thinking then widened to a more double-loop epidemiological approach, taking into account the influence of organisational processes and conditions on human error (Cullen, 1990; Perrow, 1984; Reason, 1990, 1997), and further evolving into a ‘systems’ approach. Three main ‘systems’ accident models have emerged: ‘STAMP’ (Systems-Theoretic Accident Model and Processes) (Leveson, 2004), ‘FRAM’ (Functional Resonance Analysis Method) (Hollnagel & Goteman, 2004) and ‘Accimap’ (Svedung and Rasmussen, 2002). Building on the systems approach, the development of configurational causation models using Qualitative Comparative Analysis (QCA) (Ragin, 1987) offers the possibility of an even deeper understanding of incident causes (Baumgartner, 2008). The systems approach is now ‘arguably the dominant concept within accident analysis research’ (Underwood and Waterson, 2013, p154).
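To make the contrast with the later systems models concrete, the ‘cause-and-effect’ logic of fault tree analysis can be sketched in a few lines. The events, gate structure and probabilities below are invented for illustration and assume independent basic events; they are not drawn from any real installation or from the incidents discussed in this paper.

```python
# Minimal fault-tree sketch (hypothetical events and probabilities).
# Assuming independent basic events: AND gates multiply probabilities,
# OR gates combine as 1 - product(1 - p).

def and_gate(*probs):
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Illustrative basic-event probabilities (not real data)
relief_valve_fails = 1e-3
alarm_fails = 5e-3
operator_misses_alarm = 1e-2

# Top event "unmitigated overpressure" occurs if the relief valve fails
# AND the alarm path fails (alarm fails OR operator misses the alarm).
alarm_path_fails = or_gate(alarm_fails, operator_misses_alarm)
top_event = and_gate(relief_valve_fails, alarm_path_fails)
print(f"P(top event) = {top_event:.3e}")
```

The single-loop limitation noted above is visible here: the tree answers ‘how likely is this chain of failures’, but says nothing about why maintenance or supervision allowed the basic-event probabilities to drift upward, which is the territory of double-loop learning.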
However, despite widespread acceptance of the need for analysis of the whole system, in practice investigations remain focused on ‘contributory factors at the sharp-end of the sociotechnical systems’ (Hulme et al., 2019). Hailwood (2016) summarises the practical implementation of lessons learned from industrial accidents in terms of a four-part process: investigation, reporting of the findings, dissemination, and lastly instigation, i.e. implementation of the learning in practice. Indeed, the original Seveso Directive codified the process in 1982 by obliging operators to investigate accidents and report findings to the authorities, who then store the findings in the MARS database (now eMARS) to enable dissemination of lessons by the European Commission. Other databases followed, such as the French ARIA and the German ZEMA databases, both established in the early 1990s, as well as the Japanese Failure Knowledge database in 2001. In the 1990s professional organisations, such as IChemE and CCPS, began regularly publishing lessons learned. However, there are two major difficulties in extracting and making use of the growing amount of information stored in these databases. First, searching and trend analysis is problematic due to the different ways the information is categorised, and second, most of the ‘lessons’ stored derive from the single-loop type of learning. Thus, effective dissemination of lessons remains fragmented, presenting a major barrier to implementation – without which all the other parts of the process are impotent.

3. Lessons learning has not advanced beyond the ‘acquisition’ stage

Despite all the information collected in open sources and in internal company databases on chemical accidents, there is considerable evidence that lessons learning mainly consists of reporting. In this sense, there is a failure in the investigation phase in that only single-loop learning is achieved, typically limited to the technical cause, e.g.
‘the pipe broke’ identified as the sole perpetrator of a purely linear chain of events. However, evidence points to a far more serious failure in the dissemination and implementation phases, where even the simplest technical lessons are routinely not implemented, i.e., ‘the pipe keeps breaking’. Recent accidents and industry analyses provide little evidence of real progress towards a transformative lessons learning process. In light of the developments, both in understanding organisational learning and in the theory of incident causation described earlier, it is surprising and disappointing that investigations of recent notable chemical accidents indicate a failure to apply relevant lessons learned from past incidents that were well known and widely available. As Table 1 indicates, this failure was a contributing factor in many recent major accidents involving dangerous substances. The lessons learned shown in the table have been cited as causal factors in numerous publicly available incident reports, including those of some very well-known disasters. Moreover, most of them have been widely incorporated into relevant safety norms and standards.
Table 1: Some recent major chemical accidents that stemmed from a failure to apply past lessons learned

Recent major accident – Examples of relevant lessons learned available in publicly available reports of past incidents

Iqoxe, Tarragona, Spain (14 Jan 2020):
- Location of control room
- Risk assessment of fire hazard associated with ethylene oxide
- Risk-appropriate fire detection and mitigation systems

Lubrizol, Rouen, France (26 Sep 2019):
- Safe storage conditions for flammable and combustible substances in IBCs
- Fire detection and mitigation systems for hazardous substances bulk storage

BASF, Ludwigshafen, Germany (17 Oct 2016):
- Hot work procedures
- Need for sufficient supervision and preparation to avoid contractor errors

West Fertilizer, Texas, USA (17 Apr 2013):
- Risks of contamination of ammonium nitrate
- Fire detection and mitigation systems for ammonium nitrate

A number of incident databases and other resources are readily available, at no cost to interested readers (Hailwood and Gyenes, 2020). However, there are a number of barriers to extracting the lessons learned. Language is an obvious problem where database interrogation is not available in multiple languages. A second barrier is the depth of analysis in accident reports; the linear cause-and-effect approach, commonly taken, only identifies technical causes, so the systemic failures and potentially more valuable learnings are missed. If the cause of a leak was reported as a failed seal on a pump, this finding does not answer questions about pump maintenance and inspection, operating practices, or suitability of equipment, and so a deeper, double-loop learning is not identified. Though the depth of analysis may have improved in more recent reports, earlier reports in the database will remain deficient in this respect.
A third barrier to extracting lessons learned is the lack of standardisation in the way the data are structured and can be queried, inhibiting not only extracting information but also comparing outputs from different sources. The barriers to extracting and developing lessons learned from publicly available sources of chemical accident data indicate the need for improvements. Specific improvements could be envisioned in:

• Database design, to include larger text content, as well as images, in addition to the traditional highly formatted data fields.
• Searchability, with an ultimate goal of a standardised ontology, building on intermediate steps such as increased meta-data tagging, thus allowing the identification of pattern similarity, grouping of accident scenarios, repeated causal chains, etc., in association with equipment, substances, industrial processes, etc.
• Development of analytical methods and software tools, to enable not only the extraction of ‘double-loop learnings’, but also the presentation of results in a readily understood form, so that actions can be derived from the learnings identified.

Recent research (Single et al., 2020) offers potential AI tools for addressing the problems of retrieving information on past accidents by using semantically enriched accident databases. Natural Language Processing methods can extract information and automatically populate it into a predefined ontology structure, allowing causal accident relations to be discovered. Application of AI tools together with QCA across multiple databases may enable even more valuable insights for understanding causation as identifiable configurations of multiple factors. Gordon and Wilkinson (2014) suggest that some organisations may be reluctant to provide accident information, so industry associations and professional organisations may have a greater role to play in the management of incident learning.
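A minimal sketch of the meta-data tagging and configurational grouping ideas above: the tag names, keyword lists and report texts are invented, simple keyword matching stands in for the ontology-based NLP of Single et al. (2020), and the co-occurrence grouping is only a crude stand-in for formal QCA.

```python
# Sketch: keyword-based tagging of accident report text, then grouping of tag
# combinations that recur across reports. Tags, keywords and reports are
# hypothetical; a real system would use ontology-based NLP and formal QCA.
from itertools import combinations
from collections import defaultdict

TAG_KEYWORDS = {
    "hot_work": ("welding", "grinding", "hot work"),
    "storage": ("warehouse", "storage", "ibc"),
    "fire_detection": ("no fire detection", "detection failed", "no alarm"),
}

def tag_report(text):
    """Return the tags whose keywords occur in the (lower-cased) report text."""
    lower = text.lower()
    return frozenset(tag for tag, words in TAG_KEYWORDS.items()
                     if any(w in lower for w in words))

reports = [
    "Hot work (welding) ignited packaging in a storage warehouse; no alarm sounded.",
    "Fire spread through IBC storage because detection failed.",
    "Grinding sparks started a fire; no fire detection was installed.",
]

tags = [tag_report(r) for r in reports]

# Record every tag combination (size >= 2) together with the reports it appears in
patterns = defaultdict(list)
for i, t in enumerate(tags):
    for r in range(2, len(t) + 1):
        for combo in combinations(sorted(t), r):
            patterns[combo].append(i)

# Keep only configurations shared by at least two reports
recurring = {c: ids for c, ids in patterns.items() if len(ids) >= 2}
print(recurring)
```

In a real system the tag vocabulary would come from a standardised ontology, and QCA would additionally test which configurations are necessary or sufficient for a given outcome, rather than merely counting co-occurrence as this sketch does.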
There are some positive examples of the analysis of multiple individual accident reports to identify causal patterns, or of operation- and substance-specific studies (e.g. the Lessons Learned Bulletin from the EU’s Major Accident Hazards Bureau, the ARIA database of the French Ministry of Ecology, the Loss Prevention Bulletins of IChemE). However, among the many industry associations (e.g. CEFIC, the European Process Safety Centre, Concawe, International Oil and Gas Producers, the Center for Chemical Process Safety), there is limited evidence of systematic accident analysis and lessons learned to identify patterns of causality. Incident investigations within the industry commonly focus on ‘root causes’, a failing that has been called ‘what you look for is what you find’ (Lundberg et al., 2009) and that tends to suffer from ‘hindsight bias’ (Dekker, 2011), limiting learning to single-loop learning ‘rather than to challenge deep assumptions with rigorous and systemic thinking…’, i.e. the aim of double-loop learning (Carroll, 2002). Despite this, useful learning can still come out of internal investigations, e.g. in the form of lessons learned bulletins and formal recommendations. However, this learning often fails to be fully implemented, a primary reason for the ‘failure to learn’ within an organisation (Hopkins, 2010), frequently arising from barriers that stem from the organisational culture. Many of the barriers to learning, summarised in Tables 2 and 4, are seen in organisations suffering major incidents.
For example, Schilling and Kluge (2009) describe a ‘restrictive and controlling management style’ and ‘status culture’ within outdated management, which are familiar features of the traditional ‘rule-following’ and ‘command and control’ paradigm prevalent in high hazard industries, along with a ‘high level of stress’, ‘lack of time and resources’, ‘fear of punishment’, and ‘blame culture’, which are so often associated with the asymmetric power referred to by Baumard and Starbuck (2005) and Buchanan and Denyer (2013). To achieve real instigation and implementation of learning, and move beyond reporting and dissemination (Hailwood, 2016), it is hence critical to address these barriers to learning.

Table 2: Findings from studies of the barriers to organisational learning

Schilling and Kluge (2009): Integration of new ideas inhibited by rigid/outdated managers’ beliefs or assumptions; institutionalising of learning inhibited by lack of resources or organisational hypocrisy
Coopey and Burgoyne (2000): Organisational change constrained by entrenched power structures
Baumard and Starbuck (2005): Organisational learning can be achieved, but only if top managers are ‘motivated to learn’
Buchanan and Denyer (2013): Organisational learning is a political process shaped by the interpretations and interests of competing stakeholders, ‘who may seek to protect themselves from scapegoating by producing their own event narratives’

4. More adaptive leadership practices can help create a learning culture

To achieve effective organisational learning from incidents is far from impossible. It does, though, require a cultural shift. Building on recent developments in understanding leadership in the form of practices (Raelin, 2016), together with understanding organisations as, in reality, more complex than simple hierarchies (Uhl-Bien et al., 2007), the empirical study by Cowley et al.
(2021) found that relying solely on administrative ‘command and control’ practices impedes learning, and consequently real safety improvements, since these practices tend to over-emphasise simple compliance and to discourage questioning and speaking up. There is strong evidence that learning flourishes only in a climate of psychological safety and mutual trust (Edmondson and Lei, 2014) and that creating such a climate is highly dependent on leadership. Of course, administrative practices of competent compliance with formal procedures are the foundation of safe operation, but for learning to be effective, administrative practices need to be balanced with adaptive practices, such as sense-making, open communication and collaborative problem-solving. Achieving an effective, beneficial entanglement of these two different kinds of practices is difficult, but worth the effort. The resulting combination of both administrative and adaptive practices becomes much more strongly supportive of effective learning and, therefore, better safety outcomes. To get this to happen, there needs to be a clear commitment by the CEO and senior executives to adopt the kinds of leadership practices that encourage learning. They should start by employing more adaptive and enabling leadership practices themselves, aiming to achieve an effective balance with their traditional hierarchical ‘command and control’ administrative leadership, whilst, at the same time, coaching the operational managers to do the same. An explanation of how these two apparently very different approaches, both ‘administrative’ and ‘adaptive’, can be mutually reinforcing to improve learning and safety is given in Cowley (2020).
This construction builds on ‘Complexity Leadership Theory’ (Uhl-Bien et al., 2007), which describes how administrative and adaptive processes can be effectively entangled by combinations of leadership practices: specifically, directive and managerial ‘administrative’ practices, ‘adaptive’ practices that encourage innovation and learning, and a third kind, ‘enabling’ practices, such as supporting networks and the use of constructive tension, to help the other two kinds operate together.

Table 4: Summary of barriers to learning at each part of the learning from incidents process

Investigation: Poor gathering of facts, concentrating on visible technical evidence; insufficient depth of investigation (single-loop: only technical causes)
Reporting of the accident investigation findings: Report based on technical facts; preference for a ‘single root cause’; lack of discussion of organisational/systemic interaction of causal factors
Dissemination: Fear of legal consequences hinders distribution outside the organisation; ‘command and control’ leadership inhibiting ‘psychological safety’
Instigation/implementation of the learning: Over-emphasis of ‘administrative’ practices inhibiting networks and sharing; insufficient ‘adaptive’ and ‘enabling’ leadership practices that encourage challenging existing processes and support sense-making

This approach is a departure from a strict hierarchical, rule-based system; however, it does not mean that the rulebook is thrown out completely. Rather, it recognises a need to have accepted ways of operating, and agreed behaviours and responses, and that these norms should be codified in standard operating procedures. However, as both Leveson (2011) and Hollnagel (2014) point out, this approach also recognises that situations may arise that are outside of the preconceived and regulated set of conditions, and offers a way to address them safely.
The approach also values the people involved. It recognises experience and knowledge, and that a single person should not be expected to solve unexpected or difficult issues alone when they arise. By expanding the human resources available for problem solving, networks of people beyond the organisation’s boundaries also have a role to play. These networks may be amongst the operators of a particular type of production facility, within the chemical industry more generally, and between individuals as members of professional organisations. By working in collaborative and complementary ways, these networks can make a significant contribution to learning from incidents. In particular, they can provide access (e.g. to other networks, to other industries and stakeholders) to relevant information on incidents, as well as focussing on incidents that are relevant for their particular activity. A decision of particular note in this context was taken by the Institution of Chemical Engineers (IChemE) to make the Loss Prevention Bulletin (LPB), including its archive, free to access online by all members from January 2021. Previously this journal was available only by subscription, and although it was only of moderate cost, it was a hurdle to accessing the content.

5. Conclusions and Recommendations

The continued ‘repeat’ incidents, many with serious consequences, are evidence of a widespread failure to learn in the high hazard chemical industries. A major cultural shift is needed. Existing barriers to organisational learning can be overcome, but significant sustained effort is needed. In this regard, the following specific actions are recommended:

• Learning from incidents must be led from the top, and accountability for implementing effective learning should not be delegated to safety specialists.
• CEOs and senior executives should commit to adopting more adaptive and enabling leadership practices within their organisations to create the climate of psychological safety necessary for a learning culture, alongside traditional administrative ‘command and control’ leadership practices.
• Networks should be developed within and between organisations to build expertise and share it widely.
• The use of chemical accident databases should be developed to allow better recording of information and enable the identification of systemic causes that go beyond single-loop learning, and better information sharing to include not only dissemination of accident details but also lessons to be learned and recommended actions to be taken to apply those lessons.

To sustain trust in high hazard industries, corporate leaders must demonstrate to the authorities and the public that effective strategies and adequate resources are employed to develop and sustain a lessons learning culture and competence at every site and at every organisational level.

References

Argyris, C., Schön, D. A., 1978, Organizational Learning: A Theory of Action Perspective, Reading, MA, USA: Addison-Wesley Publishing Company.
Argyris, C., Schön, D. A., 1996, Organizational Learning II: Theory, Method and Practice, Reading, MA, USA: Addison-Wesley.
Baumard, P., Starbuck, W. H., 2005, Learning from failures: Why it may not happen, Long Range Planning, 38(3), 281–298.
Baumgartner, M., 2008, Regularity theories reassessed, Philosophia, 36(3), 327–354.
Buchanan, D. A., Denyer, D., 2013, Researching tomorrow’s crisis: Methodological innovations and wider implications, International Journal of Management Reviews, 15(2), 205–224.
Carroll, J. S., 2002, Learning from experience in high hazard organizations, Research in Organizational Behavior, 24, 87–137.
Coopey, J., Burgoyne, J., 2000, Politics and organizational learning, Journal of Management Studies, 37(6), 869–886.
Cowley, C. I., 2020, The paradox of safety – Challenging the current paradigms of organization and leadership in the prevention of disasters from high hazard technology, Doctoral dissertation, Cranfield University.
Cowley, C. I., Denyer, D., Kutsch, E., Turnbull-James, K., 2021, Constructing safety: Reconciling error prevention and error management in oil and gas and petrochemicals operations, Academy of Management Discoveries, doi.org/10.5465/amd.2019.0190.
Cullen, L., 1990, The Public Inquiry into the Piper Alpha Disaster, Vol. 2, U.K. Department of Energy, Presented to Parliament by the Secretary of State for Energy by Command of Her Majesty, London, UK: HMSO. www.hse.gov.uk/offshore/piper-alpha-public-inquiry-volume2.pdf, accessed 30 November 2021.
Dekker, S., 2011, The criminalization of human error in aviation and healthcare: A review, Safety Science, 49(2), 121–127.
Edmondson, A. C., Lei, Z., 2014, Psychological safety: The history, renaissance, and future of an interpersonal construct, Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 23–43.
Gordon, B. J., Wilkinson, P., 2014, Should lessons learned be the blueprint for the future?, International Petroleum Technology Conference, doi.org/10.2523/IPTC-17927-MS.
Hailwood, M., 2016, Learning from accidents – reporting is not enough, Chemical Engineering Transactions, 48, 709–714.
Hailwood, M., Gyenes, Z., 2020, Accident databases – a review, Loss Prevention Bulletin, 275 (October), 15–17.
Heinrich, H. W., 1936, Industrial Accident Prevention, New York, NY, USA: McGraw-Hill.
Hollnagel, E., 2014, Safety-I and Safety-II, Aldershot, UK: Ashgate Publishing Limited.
Hollnagel, E., Goteman, Ö., 2004, The functional resonance accident model, Proceedings of Cognitive System Engineering in Process Plant, (February), 155–161.
Hopkins, A., 2006, What are we to make of safe behaviour programs?, Safety Science, 44(7), 583–597.
Hopkins, A., 2010, Failure to Learn: The BP Texas City Refinery Disaster, Sydney, AU: CCH Australia Ltd.
Hulme, A., Stanton, N. A., Walker, G. H., Waterson, P., Salmon, P. M., 2019, What do applications of systems thinking accident analysis methods tell us about accident causation? A systematic review of applications between 1990 and 2018, Safety Science, 117(March), 164–183.
Kletz, T., 1993, Lessons from Disaster: How Organisations Have No Memory and Accidents Occur, Melksham, UK: Redwood Press Limited (Reprinted 2003, Eastham, UK: Anthony Rowe Limited).
Krause, T., 1990, The Behaviour-Based Safety Process, New York, NY, USA: Van Nostrand Reinhold.
Leveson, N., 2004, A new accident model for engineering safer systems, Safety Science, 42(4), 237–270.
Leveson, N., 2011, Engineering a Safer World – Systems Thinking Applied to Safety, Cambridge, MA, USA: The MIT Press.
Lundberg, J., Rollenhagen, C., Hollnagel, E., 2009, What-you-look-for-is-what-you-find – The consequences of underlying accident models in eight accident investigation manuals, Safety Science, 47(10), 1297–1311.
Perrow, C., 1984, Normal Accidents: Living with High-Risk Technologies, Princeton, NJ, USA: Princeton University Press.
Raelin, J. A., 2016, Leadership-as-practice: Theory and application – An editor’s reflection, Leadership, 13(2), 215–221.
Ragin, C. C., 1987, The Comparative Method, Oakland, CA, USA: University of California Press.
Reason, J., 1990, Human Error, Cambridge, UK: Cambridge University Press.
Reason, J. T., 1997, Managing the Risks of Organizational Accidents, Farnham, UK: Ashgate.
Schilling, J., Kluge, A., 2009, Barriers to organizational learning: An integration of theory and research, International Journal of Management Reviews, 11(3), 337–360.
Single, J. I., Schmidt, J., Denecke, J., 2020, Knowledge acquisition from chemical accident databases using an ontology-based method and natural language processing, Safety Science, 129, 104747.
Svedung, I., Rasmussen, J., 2002, Graphic representation of accident scenarios: Mapping system structure and the causation of accidents, Safety Science, 40(5), 397–417.
Uhl-Bien, M., Marion, R., McKelvey, B., 2007, Complexity leadership theory: Shifting leadership from the industrial age to the knowledge era, The Leadership Quarterly, 18, 298–318.
Underwood, P., Waterson, P., 2013, Systemic accident analysis: Examining the gap between research and practice, Accident Analysis and Prevention, 55, 154–164.
US Department of Defense, 1949, Procedures for Performing a Failure Mode Effect and Critical Analysis, MIL-P-1629.
Watson, H. A., 1961, Launch Control Safety Study, Vol. 1, Murray Hill, NJ, USA: Bell Telephone Laboratories.